Test Report: Docker_Linux_containerd_arm64 17488

292152b7ba2fff47063f7712cda18987a57d80fb:2023-10-25:31605

Failed tests (8/308)

TestAddons/parallel/Ingress (37.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-624750 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-624750 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-624750 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4095c1aa-c792-4774-97ec-069bb4a76c95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4095c1aa-c792-4774-97ec-069bb4a76c95] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.02495355s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-624750 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:285: (dbg) Done: kubectl --context addons-624750 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.034551639s)
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.061133703s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable ingress-dns --alsologtostderr -v=1: (1.376416263s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable ingress --alsologtostderr -v=1: (7.786576658s)
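For reference, the step that fails above reduces to a plain DNS query against the minikube node IP. The following is a minimal Go sketch of an equivalent lookup, not the test's actual code; the host name "hello-john.test", the server 192.168.49.2, and the roughly 15s budget are taken from the log lines above, everything else is illustrative.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route the query to the ingress-dns endpoint on the node IP instead of
	// the system resolver, mirroring `nslookup hello-john.test 192.168.49.2`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here corresponds to nslookup's ";; connection timed out;
		// no servers could be reached" seen in the failure output above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}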
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-624750
helpers_test.go:235: (dbg) docker inspect addons-624750:

-- stdout --
	[
	    {
	        "Id": "f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9",
	        "Created": "2023-10-25T21:41:36.775558143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:41:37.113965582Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9/hosts",
	        "LogPath": "/var/lib/docker/containers/f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9/f6a7a1eeafdf49d6e7b1a0acd80293fe882831a7e7ba236071b76e58ba7cb1e9-json.log",
	        "Name": "/addons-624750",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-624750:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-624750",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4dd4a96272f1b64432e63eb40c63d476bfc9f0c14398c16f32ff9daa5712cc8c-init/diff:/var/lib/docker/overlay2/72a373cc1a648bd482c91a7d51c6d15fd52c6262ee2446bc4493d43e0c8c95ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4dd4a96272f1b64432e63eb40c63d476bfc9f0c14398c16f32ff9daa5712cc8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4dd4a96272f1b64432e63eb40c63d476bfc9f0c14398c16f32ff9daa5712cc8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4dd4a96272f1b64432e63eb40c63d476bfc9f0c14398c16f32ff9daa5712cc8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-624750",
	                "Source": "/var/lib/docker/volumes/addons-624750/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-624750",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-624750",
	                "name.minikube.sigs.k8s.io": "addons-624750",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8ea76d62f72a24e5fd2713a03b7b1c2f0f6df2cae278d2a50787d9acb8dd76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9b8ea76d62f7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-624750": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6a7a1eeafdf",
	                        "addons-624750"
	                    ],
	                    "NetworkID": "d339062511ca6831e6c37abe6e66c513648031c2b280aaa8e19fdda8a27acd1e",
	                    "EndpointID": "3f4b8a124871e51a3ed44b19b2bd60fc8dc93845305112606fcf39b0b1f8d73d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
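The inspect dump above is also where the node address used by the failing nslookup comes from: NetworkSettings.Networks["addons-624750"].IPAddress is 192.168.49.2. Below is a hedged Go sketch of pulling that field out of `docker inspect` JSON; the container name is taken from this run, and the program is illustrative rather than minikube's implementation.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry models only the slice of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress string
		}
	}
}

func main() {
	// `docker inspect` prints a JSON array with one entry per container.
	out, err := exec.Command("docker", "inspect", "addons-624750").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		for name, nw := range e.NetworkSettings.Networks {
			fmt.Printf("%s: %s\n", name, nw.IPAddress) // expect addons-624750: 192.168.49.2
		}
	}
}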
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-624750 -n addons-624750
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 logs -n 25: (1.596845787s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:41 UTC |
	| delete  | -p download-only-836857                                                                     | download-only-836857   | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:41 UTC |
	| delete  | -p download-only-836857                                                                     | download-only-836857   | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:41 UTC |
	| start   | --download-only -p                                                                          | download-docker-305055 | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC |                     |
	|         | download-docker-305055                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-305055                                                                   | download-docker-305055 | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-927332   | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC |                     |
	|         | binary-mirror-927332                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33477                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-927332                                                                     | binary-mirror-927332   | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:41 UTC |
	| addons  | enable dashboard -p                                                                         | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC |                     |
	|         | addons-624750                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC |                     |
	|         | addons-624750                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-624750 --wait=true                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:41 UTC | 25 Oct 23 21:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	|         | -p addons-624750                                                                            |                        |         |         |                     |                     |
	| ip      | addons-624750 ip                                                                            | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	| addons  | addons-624750 addons disable                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-624750 ssh cat                                                                       | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	|         | /opt/local-path-provisioner/pvc-be298c39-cf02-4c48-8430-ade38dd1c543_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-624750 addons disable                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	|         | addons-624750                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:43 UTC | 25 Oct 23 21:43 UTC |
	|         | -p addons-624750                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-624750 addons                                                                        | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:44 UTC | 25 Oct 23 21:44 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-624750 addons                                                                        | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:44 UTC | 25 Oct 23 21:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:44 UTC | 25 Oct 23 21:44 UTC |
	|         | addons-624750                                                                               |                        |         |         |                     |                     |
	| addons  | addons-624750 addons                                                                        | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:44 UTC | 25 Oct 23 21:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-624750 ssh curl -s                                                                   | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:44 UTC | 25 Oct 23 21:44 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-624750 ip                                                                            | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:45 UTC | 25 Oct 23 21:45 UTC |
	| addons  | addons-624750 addons disable                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:45 UTC | 25 Oct 23 21:45 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-624750 addons disable                                                                | addons-624750          | jenkins | v1.31.2 | 25 Oct 23 21:45 UTC | 25 Oct 23 21:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:41:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:41:12.077167  406952 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:41:12.077355  406952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:41:12.077387  406952 out.go:309] Setting ErrFile to fd 2...
	I1025 21:41:12.077406  406952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:41:12.077708  406952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 21:41:12.078255  406952 out.go:303] Setting JSON to false
	I1025 21:41:12.079403  406952 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5009,"bootTime":1698265063,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:41:12.079523  406952 start.go:138] virtualization:  
	I1025 21:41:12.083030  406952 out.go:177] * [addons-624750] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 21:41:12.085204  406952 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:41:12.085330  406952 notify.go:220] Checking for updates...
	I1025 21:41:12.087062  406952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:41:12.088968  406952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:41:12.090592  406952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:41:12.092496  406952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 21:41:12.094038  406952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:41:12.095912  406952 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:41:12.122677  406952 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:41:12.122809  406952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:41:12.213607  406952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-25 21:41:12.203860363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:41:12.213719  406952 docker.go:295] overlay module found
	I1025 21:41:12.215714  406952 out.go:177] * Using the docker driver based on user configuration
	I1025 21:41:12.217374  406952 start.go:298] selected driver: docker
	I1025 21:41:12.217394  406952 start.go:902] validating driver "docker" against <nil>
	I1025 21:41:12.217407  406952 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:41:12.218186  406952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:41:12.286108  406952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-25 21:41:12.276940234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:41:12.286267  406952 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:41:12.286498  406952 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:41:12.288417  406952 out.go:177] * Using Docker driver with root privileges
	I1025 21:41:12.289995  406952 cni.go:84] Creating CNI manager for ""
	I1025 21:41:12.290011  406952 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:41:12.290023  406952 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:41:12.290032  406952 start_flags.go:323] config:
	{Name:addons-624750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-624750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:41:12.292012  406952 out.go:177] * Starting control plane node addons-624750 in cluster addons-624750
	I1025 21:41:12.293656  406952 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1025 21:41:12.295153  406952 out.go:177] * Pulling base image ...
	I1025 21:41:12.296705  406952 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:41:12.296741  406952 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:41:12.296756  406952 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1025 21:41:12.296764  406952 cache.go:56] Caching tarball of preloaded images
	I1025 21:41:12.296840  406952 preload.go:174] Found /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 21:41:12.296851  406952 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1025 21:41:12.297224  406952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/config.json ...
	I1025 21:41:12.297253  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/config.json: {Name:mk91c8f48c165d14ce0237b3fccdaed2d0884cc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:12.314806  406952 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:41:12.314915  406952 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:41:12.314940  406952 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 21:41:12.314950  406952 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 21:41:12.314958  406952 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:41:12.314963  406952 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1025 21:41:28.050933  406952 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1025 21:41:28.050976  406952 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:41:28.051030  406952 start.go:365] acquiring machines lock for addons-624750: {Name:mk9b7439f76ac82267e5342e8cae5b3e81d5a3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:41:28.051490  406952 start.go:369] acquired machines lock for "addons-624750" in 434.172µs
	I1025 21:41:28.051528  406952 start.go:93] Provisioning new machine with config: &{Name:addons-624750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-624750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 21:41:28.051623  406952 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:41:28.053673  406952 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1025 21:41:28.053950  406952 start.go:159] libmachine.API.Create for "addons-624750" (driver="docker")
	I1025 21:41:28.053986  406952 client.go:168] LocalClient.Create starting
	I1025 21:41:28.054104  406952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem
	I1025 21:41:29.311501  406952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem
	I1025 21:41:30.102239  406952 cli_runner.go:164] Run: docker network inspect addons-624750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:41:30.128907  406952 cli_runner.go:211] docker network inspect addons-624750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:41:30.129033  406952 network_create.go:281] running [docker network inspect addons-624750] to gather additional debugging logs...
	I1025 21:41:30.129081  406952 cli_runner.go:164] Run: docker network inspect addons-624750
	W1025 21:41:30.150067  406952 cli_runner.go:211] docker network inspect addons-624750 returned with exit code 1
	I1025 21:41:30.150112  406952 network_create.go:284] error running [docker network inspect addons-624750]: docker network inspect addons-624750: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-624750 not found
	I1025 21:41:30.150128  406952 network_create.go:286] output of [docker network inspect addons-624750]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-624750 not found
	
	** /stderr **
	I1025 21:41:30.150276  406952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:41:30.171275  406952 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002544830}
	I1025 21:41:30.171325  406952 network_create.go:124] attempt to create docker network addons-624750 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:41:30.171412  406952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-624750 addons-624750
	I1025 21:41:30.256013  406952 network_create.go:108] docker network addons-624750 192.168.49.0/24 created
	I1025 21:41:30.256048  406952 kic.go:118] calculated static IP "192.168.49.2" for the "addons-624750" container
	I1025 21:41:30.256134  406952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:41:30.273287  406952 cli_runner.go:164] Run: docker volume create addons-624750 --label name.minikube.sigs.k8s.io=addons-624750 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:41:30.293590  406952 oci.go:103] Successfully created a docker volume addons-624750
	I1025 21:41:30.293682  406952 cli_runner.go:164] Run: docker run --rm --name addons-624750-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-624750 --entrypoint /usr/bin/test -v addons-624750:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:41:32.422420  406952 cli_runner.go:217] Completed: docker run --rm --name addons-624750-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-624750 --entrypoint /usr/bin/test -v addons-624750:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (2.128687242s)
	I1025 21:41:32.422467  406952 oci.go:107] Successfully prepared a docker volume addons-624750
	I1025 21:41:32.422489  406952 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:41:32.422514  406952 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:41:32.422599  406952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-624750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:41:36.694374  406952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-624750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.271718164s)
	I1025 21:41:36.694408  406952 kic.go:200] duration metric: took 4.271890 seconds to extract preloaded images to volume
	W1025 21:41:36.694554  406952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:41:36.694682  406952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:41:36.759373  406952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-624750 --name addons-624750 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-624750 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-624750 --network addons-624750 --ip 192.168.49.2 --volume addons-624750:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:41:37.121921  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Running}}
	I1025 21:41:37.152754  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:41:37.174895  406952 cli_runner.go:164] Run: docker exec addons-624750 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:41:37.258337  406952 oci.go:144] the created container "addons-624750" has a running status.
	I1025 21:41:37.258367  406952 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa...
	I1025 21:41:37.604883  406952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:41:37.631149  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:41:37.660451  406952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:41:37.660479  406952 kic_runner.go:114] Args: [docker exec --privileged addons-624750 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:41:37.738029  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:41:37.768960  406952 machine.go:88] provisioning docker machine ...
	I1025 21:41:37.768995  406952 ubuntu.go:169] provisioning hostname "addons-624750"
	I1025 21:41:37.769201  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:37.797894  406952 main.go:141] libmachine: Using SSH client type: native
	I1025 21:41:37.798369  406952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1025 21:41:37.798383  406952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-624750 && echo "addons-624750" | sudo tee /etc/hostname
	I1025 21:41:37.798968  406952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54752->127.0.0.1:33103: read: connection reset by peer
	I1025 21:41:40.953432  406952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-624750
	
	I1025 21:41:40.953516  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:40.974929  406952 main.go:141] libmachine: Using SSH client type: native
	I1025 21:41:40.975340  406952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1025 21:41:40.975364  406952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-624750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-624750/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-624750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:41:41.114307  406952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
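	
	The SSH script above is idempotent: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place or appends one. The result can be checked over the mapped SSH port from this run (33103) with the key minikube generated (a sketch):
	
	    ssh -i /home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa \
	        -p 33103 docker@127.0.0.1 'hostname && grep 127.0.1.1 /etc/hosts'
	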
	I1025 21:41:41.114335  406952 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-401064/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-401064/.minikube}
	I1025 21:41:41.114367  406952 ubuntu.go:177] setting up certificates
	I1025 21:41:41.114376  406952 provision.go:83] configureAuth start
	I1025 21:41:41.114443  406952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-624750
	I1025 21:41:41.133025  406952 provision.go:138] copyHostCerts
	I1025 21:41:41.133221  406952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem (1082 bytes)
	I1025 21:41:41.133362  406952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem (1123 bytes)
	I1025 21:41:41.133440  406952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem (1675 bytes)
	I1025 21:41:41.133495  406952 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem org=jenkins.addons-624750 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-624750]
	I1025 21:41:41.781768  406952 provision.go:172] copyRemoteCerts
	I1025 21:41:41.781835  406952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:41:41.781877  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:41.800465  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:41:41.899909  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 21:41:41.928015  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 21:41:41.956407  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:41:41.984421  406952 provision.go:86] duration metric: configureAuth took 870.026507ms
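	
	configureAuth generates the server certificate in Go (provision.go), but an openssl sketch producing a certificate with the same SANs would look roughly like this (file names are placeholders, not minikube's):
	
	    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.addons-624750" \
	        -keyout server-key.pem -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	        -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-624750') \
	        -out server.pem
	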
	I1025 21:41:41.984447  406952 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:41:41.984629  406952 config.go:182] Loaded profile config "addons-624750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:41:41.984643  406952 machine.go:91] provisioned docker machine in 4.215668557s
	I1025 21:41:41.984650  406952 client.go:171] LocalClient.Create took 13.930655965s
	I1025 21:41:41.984667  406952 start.go:167] duration metric: libmachine.API.Create for "addons-624750" took 13.930718306s
	I1025 21:41:41.984675  406952 start.go:300] post-start starting for "addons-624750" (driver="docker")
	I1025 21:41:41.984683  406952 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:41:41.984747  406952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:41:41.984790  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:42.003379  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:41:42.110053  406952 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:41:42.115259  406952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:41:42.115299  406952 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:41:42.115312  406952 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:41:42.115320  406952 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:41:42.115351  406952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/addons for local assets ...
	I1025 21:41:42.115447  406952 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/files for local assets ...
	I1025 21:41:42.115482  406952 start.go:303] post-start completed in 130.801729ms
	I1025 21:41:42.115866  406952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-624750
	I1025 21:41:42.137834  406952 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/config.json ...
	I1025 21:41:42.138194  406952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:41:42.138269  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:42.159494  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:41:42.264608  406952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:41:42.270981  406952 start.go:128] duration metric: createHost completed in 14.219343309s
	I1025 21:41:42.271006  406952 start.go:83] releasing machines lock for "addons-624750", held for 14.219497874s
	I1025 21:41:42.271082  406952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-624750
	I1025 21:41:42.290540  406952 ssh_runner.go:195] Run: cat /version.json
	I1025 21:41:42.290598  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:42.290628  406952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:41:42.290691  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:41:42.313424  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:41:42.327182  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:41:42.546893  406952 ssh_runner.go:195] Run: systemctl --version
	I1025 21:41:42.552419  406952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:41:42.558100  406952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 21:41:42.586802  406952 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:41:42.586917  406952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:41:42.620409  406952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
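	
	The two find commands above prepare /etc/cni/net.d for kindnet: the loopback config gains an explicit "name" field and a cniVersion of 1.0.0, while any bridge/podman configs are renamed to *.mk_disabled so they stop competing for pod networking. After the patch the loopback file reads roughly as follows (reconstructed from the sed expressions, not captured from the node):
	
	    cat /etc/cni/net.d/*loopback.conf*
	    # reconstructed output:
	    # {
	    #   "cniVersion": "1.0.0",
	    #   "name": "loopback",
	    #   "type": "loopback"
	    # }
	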
	I1025 21:41:42.620434  406952 start.go:472] detecting cgroup driver to use...
	I1025 21:41:42.620467  406952 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:41:42.620518  406952 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 21:41:42.634911  406952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 21:41:42.648834  406952 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:41:42.648946  406952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:41:42.664833  406952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:41:42.681173  406952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:41:42.774983  406952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:41:42.875834  406952 docker.go:214] disabling docker service ...
	I1025 21:41:42.875940  406952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:41:42.898015  406952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:41:42.911953  406952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:41:43.020135  406952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:41:43.120330  406952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:41:43.133946  406952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:41:43.154214  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 21:41:43.166833  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 21:41:43.179290  406952 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 21:41:43.179389  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 21:41:43.191700  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:41:43.204095  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 21:41:43.216472  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:41:43.229202  406952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:41:43.241201  406952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 21:41:43.253441  406952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:41:43.263857  406952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:41:43.274143  406952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:41:43.359779  406952 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 21:41:43.503552  406952 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1025 21:41:43.503638  406952 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1025 21:41:43.508719  406952 start.go:540] Will wait 60s for crictl version
	I1025 21:41:43.508779  406952 ssh_runner.go:195] Run: which crictl
	I1025 21:41:43.513467  406952 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:41:43.556180  406952 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1025 21:41:43.556256  406952 ssh_runner.go:195] Run: containerd --version
	I1025 21:41:43.585154  406952 ssh_runner.go:195] Run: containerd --version
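	
	The sed edits above pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate any v1/runc.v1 runtime references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d; /etc/crictl.yaml routes crictl to the containerd socket. A quick spot-check of the result (a sketch):
	
	    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	    cat /etc/crictl.yaml        # runtime-endpoint: unix:///run/containerd/containerd.sock
	    sudo crictl version         # containerd 1.6.24, RuntimeApiVersion v1 in this run
	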
	I1025 21:41:43.616320  406952 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1025 21:41:43.618496  406952 cli_runner.go:164] Run: docker network inspect addons-624750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:41:43.636642  406952 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 21:41:43.641491  406952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
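	
	The hosts rewrite goes through a temp file plus cp rather than sed -i because /etc/hosts inside a container is a bind mount: its contents can be rewritten, but the inode cannot be replaced. The pattern, with comments:
	
	    # drop any stale entry, append the current one, then copy the result back in place
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
	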
	I1025 21:41:43.655404  406952 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:41:43.655473  406952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:41:43.697520  406952 containerd.go:604] all images are preloaded for containerd runtime.
	I1025 21:41:43.697544  406952 containerd.go:518] Images already preloaded, skipping extraction
	I1025 21:41:43.697614  406952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:41:43.739705  406952 containerd.go:604] all images are preloaded for containerd runtime.
	I1025 21:41:43.739729  406952 cache_images.go:84] Images are preloaded, skipping loading
	I1025 21:41:43.739791  406952 ssh_runner.go:195] Run: sudo crictl info
	I1025 21:41:43.780179  406952 cni.go:84] Creating CNI manager for ""
	I1025 21:41:43.780209  406952 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:41:43.780239  406952 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:41:43.780265  406952 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-624750 NodeName:addons-624750 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:41:43.780396  406952 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-624750"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
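	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to /var/tmp/minikube/kubeadm.yaml just before init. It can be validated without touching the node (a sketch):
	
	    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml --dry-run
	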
	I1025 21:41:43.780467  406952 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-624750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-624750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:41:43.780539  406952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 21:41:43.791248  406952 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:41:43.791320  406952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:41:43.801824  406952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1025 21:41:43.823532  406952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:41:43.844722  406952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
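	
	Three files are written from memory above: the kubelet systemd drop-in (10-kubeadm.conf, 385 bytes), the kubelet unit itself (352 bytes), and the kubeadm config (2102 bytes). The effective kubelet invocation can be inspected with (a sketch):
	
	    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in
	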
	I1025 21:41:43.865754  406952 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:41:43.869904  406952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:41:43.882838  406952 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750 for IP: 192.168.49.2
	I1025 21:41:43.882870  406952 certs.go:190] acquiring lock for shared ca certs: {Name:mkce8239dfcf921c4b21f688c78784f182dcce0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:43.883044  406952 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key
	I1025 21:41:44.253303  406952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt ...
	I1025 21:41:44.253333  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt: {Name:mk8f363dac33530758269de4a38a502032ba03bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:44.253558  406952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key ...
	I1025 21:41:44.253579  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key: {Name:mkc20368aeaed272892541df4ed2ceaf85f92e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:44.253676  406952 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key
	I1025 21:41:44.550711  406952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt ...
	I1025 21:41:44.550738  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt: {Name:mkb3fc7e2afe4e383ba73c84777f378296ead3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:44.551296  406952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key ...
	I1025 21:41:44.551313  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key: {Name:mkc006b11cbe35a1ad11747281eb68098ddd15f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:44.551438  406952 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.key
	I1025 21:41:44.551455  406952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt with IP's: []
	I1025 21:41:45.313271  406952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt ...
	I1025 21:41:45.313310  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: {Name:mkfd7c7c62615005f7e72a08497bb6d02bcdfd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:45.313984  406952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.key ...
	I1025 21:41:45.314015  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.key: {Name:mk6da2ffa849931c6f30f9ed49b2a03e8ffed916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:45.314153  406952 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key.dd3b5fb2
	I1025 21:41:45.314177  406952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:41:46.137381  406952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt.dd3b5fb2 ...
	I1025 21:41:46.137413  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt.dd3b5fb2: {Name:mk2df75f109520dc82ba6dfc0fdd6017c7927fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:46.137606  406952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key.dd3b5fb2 ...
	I1025 21:41:46.137619  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key.dd3b5fb2: {Name:mk2202d20720cd955f971633abe336a29cd59e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:46.137706  406952 certs.go:337] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt
	I1025 21:41:46.137780  406952 certs.go:341] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key
	I1025 21:41:46.137832  406952 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.key
	I1025 21:41:46.137856  406952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.crt with IP's: []
	I1025 21:41:46.287932  406952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.crt ...
	I1025 21:41:46.287961  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.crt: {Name:mk11240f73ea95223ed494cc2485831c6e461c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:46.288142  406952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.key ...
	I1025 21:41:46.288156  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.key: {Name:mk9a5b0cae1188c794690e8474bc73b79c6f444c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:41:46.288346  406952 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:41:46.288389  406952 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem (1082 bytes)
	I1025 21:41:46.288421  406952 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:41:46.288449  406952 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem (1675 bytes)
	I1025 21:41:46.289043  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:41:46.317232  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 21:41:46.346306  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:41:46.374906  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:41:46.403794  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:41:46.431581  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:41:46.460363  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:41:46.487694  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 21:41:46.515194  406952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:41:46.543884  406952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:41:46.564577  406952 ssh_runner.go:195] Run: openssl version
	I1025 21:41:46.571440  406952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:41:46.583051  406952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:41:46.587776  406952 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:41:46.587852  406952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:41:46.596522  406952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
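	
	OpenSSL looks up CAs in /etc/ssl/certs by subject-hash file name, which is why the symlink above is named b5213941.0: the hash comes from the openssl x509 run two lines earlier. Done by hand it would be:
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	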
	I1025 21:41:46.608247  406952 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:41:46.612620  406952 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:41:46.612688  406952 kubeadm.go:404] StartCluster: {Name:addons-624750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-624750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:41:46.612776  406952 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1025 21:41:46.612831  406952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:41:46.654061  406952 cri.go:89] found id: ""
	I1025 21:41:46.654174  406952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:41:46.664563  406952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:41:46.675049  406952 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:41:46.675133  406952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:41:46.685606  406952 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:41:46.685664  406952 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 21:41:46.741143  406952 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 21:41:46.741405  406952 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:41:46.790402  406952 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:41:46.790473  406952 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1025 21:41:46.790509  406952 kubeadm.go:322] OS: Linux
	I1025 21:41:46.790560  406952 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:41:46.790609  406952 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:41:46.790657  406952 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:41:46.790706  406952 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:41:46.790753  406952 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:41:46.790803  406952 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:41:46.790849  406952 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1025 21:41:46.790897  406952 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1025 21:41:46.790943  406952 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1025 21:41:46.870989  406952 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:41:46.871122  406952 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:41:46.871289  406952 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 21:41:47.121944  406952 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:41:47.123803  406952 out.go:204]   - Generating certificates and keys ...
	I1025 21:41:47.123916  406952 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:41:47.123985  406952 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:41:47.689228  406952 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:41:48.279735  406952 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:41:48.692832  406952 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:41:48.970658  406952 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:41:49.405686  406952 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:41:49.406043  406952 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-624750 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:41:49.686467  406952 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:41:49.686782  406952 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-624750 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:41:50.474506  406952 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:41:51.087789  406952 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:41:51.298912  406952 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:41:51.299259  406952 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:41:51.696751  406952 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:41:52.149549  406952 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:41:52.295846  406952 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:41:53.039122  406952 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:41:53.039860  406952 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:41:53.042425  406952 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:41:53.044395  406952 out.go:204]   - Booting up control plane ...
	I1025 21:41:53.044527  406952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:41:53.044601  406952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:41:53.045287  406952 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:41:53.060551  406952 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:41:53.060643  406952 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:41:53.060680  406952 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:41:53.160269  406952 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:41:59.662737  406952 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502794 seconds
	I1025 21:41:59.662855  406952 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:41:59.675889  406952 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:42:00.227350  406952 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:42:00.227802  406952 kubeadm.go:322] [mark-control-plane] Marking the node addons-624750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:42:00.740321  406952 kubeadm.go:322] [bootstrap-token] Using token: r6m72n.hlg4p9e99363tzd2
	I1025 21:42:00.742311  406952 out.go:204]   - Configuring RBAC rules ...
	I1025 21:42:00.742433  406952 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:42:00.752003  406952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:42:00.762594  406952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:42:00.766274  406952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:42:00.771864  406952 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:42:00.775622  406952 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:42:00.789513  406952 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:42:01.032945  406952 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:42:01.157358  406952 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:42:01.158451  406952 kubeadm.go:322] 
	I1025 21:42:01.158518  406952 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:42:01.158525  406952 kubeadm.go:322] 
	I1025 21:42:01.158596  406952 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:42:01.158601  406952 kubeadm.go:322] 
	I1025 21:42:01.158625  406952 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:42:01.158681  406952 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:42:01.158729  406952 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:42:01.158734  406952 kubeadm.go:322] 
	I1025 21:42:01.158785  406952 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 21:42:01.158793  406952 kubeadm.go:322] 
	I1025 21:42:01.158838  406952 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:42:01.158843  406952 kubeadm.go:322] 
	I1025 21:42:01.158901  406952 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:42:01.158973  406952 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:42:01.159037  406952 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:42:01.159042  406952 kubeadm.go:322] 
	I1025 21:42:01.159120  406952 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:42:01.159192  406952 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:42:01.159197  406952 kubeadm.go:322] 
	I1025 21:42:01.159275  406952 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r6m72n.hlg4p9e99363tzd2 \
	I1025 21:42:01.159372  406952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 \
	I1025 21:42:01.159392  406952 kubeadm.go:322] 	--control-plane 
	I1025 21:42:01.159397  406952 kubeadm.go:322] 
	I1025 21:42:01.159476  406952 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:42:01.159482  406952 kubeadm.go:322] 
	I1025 21:42:01.159871  406952 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r6m72n.hlg4p9e99363tzd2 \
	I1025 21:42:01.159975  406952 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 
	I1025 21:42:01.164056  406952 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1025 21:42:01.164281  406952 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:42:01.164325  406952 cni.go:84] Creating CNI manager for ""
	I1025 21:42:01.164350  406952 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:42:01.167917  406952 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:42:01.170332  406952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:42:01.175623  406952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 21:42:01.175641  406952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:42:01.211209  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:42:02.182288  406952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:42:02.182419  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:02.182486  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=addons-624750 minikube.k8s.io/updated_at=2023_10_25T21_42_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:02.429980  406952 ops.go:34] apiserver oom_adj: -16
	I1025 21:42:02.430075  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:02.537309  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:03.129266  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:03.629649  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:04.129423  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:04.629421  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:05.129213  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:05.629458  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:06.129426  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:06.629425  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:07.129102  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:07.629149  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:08.128839  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:08.628763  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:09.128706  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:09.628898  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:10.129265  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:10.628628  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:11.128685  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:11.628728  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:12.129154  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:12.628805  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:13.128736  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:13.628729  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:14.129403  406952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:42:14.266798  406952 kubeadm.go:1081] duration metric: took 12.084426207s to wait for elevateKubeSystemPrivileges.
	I1025 21:42:14.266824  406952 kubeadm.go:406] StartCluster complete in 27.654156699s
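	
	The burst of identical "kubectl get sa default" commands above is elevateKubeSystemPrivileges polling roughly every 500ms until the default service account exists, so that the addon deployments that follow have something to run as. The pattern, as a sketch:
	
	    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	          get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
	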
	I1025 21:42:14.266843  406952 settings.go:142] acquiring lock: {Name:mk9df4aad1a9be3e880e7cbb06d6b12a9835859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:42:14.266952  406952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:42:14.267352  406952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/kubeconfig: {Name:mk815098196b1e4c9adc580a5ae817d2d2e4d151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:42:14.268070  406952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:42:14.268363  406952 config.go:182] Loaded profile config "addons-624750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:42:14.268460  406952 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1025 21:42:14.268553  406952 addons.go:69] Setting volumesnapshots=true in profile "addons-624750"
	I1025 21:42:14.268568  406952 addons.go:231] Setting addon volumesnapshots=true in "addons-624750"
	I1025 21:42:14.268604  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.269118  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.270760  406952 addons.go:69] Setting ingress-dns=true in profile "addons-624750"
	I1025 21:42:14.270788  406952 addons.go:231] Setting addon ingress-dns=true in "addons-624750"
	I1025 21:42:14.270846  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.271330  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.271762  406952 addons.go:69] Setting cloud-spanner=true in profile "addons-624750"
	I1025 21:42:14.271785  406952 addons.go:231] Setting addon cloud-spanner=true in "addons-624750"
	I1025 21:42:14.271833  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.272216  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.272328  406952 addons.go:69] Setting inspektor-gadget=true in profile "addons-624750"
	I1025 21:42:14.272344  406952 addons.go:231] Setting addon inspektor-gadget=true in "addons-624750"
	I1025 21:42:14.272372  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.272736  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.275092  406952 addons.go:69] Setting metrics-server=true in profile "addons-624750"
	I1025 21:42:14.275124  406952 addons.go:231] Setting addon metrics-server=true in "addons-624750"
	I1025 21:42:14.275167  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.275605  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.280143  406952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-624750"
	I1025 21:42:14.280171  406952 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-624750"
	I1025 21:42:14.280220  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.280628  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.292350  406952 addons.go:69] Setting registry=true in profile "addons-624750"
	I1025 21:42:14.292385  406952 addons.go:231] Setting addon registry=true in "addons-624750"
	I1025 21:42:14.292434  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.292862  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.303609  406952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-624750"
	I1025 21:42:14.303702  406952 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-624750"
	I1025 21:42:14.303757  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.304199  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.310629  406952 addons.go:69] Setting storage-provisioner=true in profile "addons-624750"
	I1025 21:42:14.310670  406952 addons.go:231] Setting addon storage-provisioner=true in "addons-624750"
	I1025 21:42:14.310720  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.311158  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.320265  406952 addons.go:69] Setting default-storageclass=true in profile "addons-624750"
	I1025 21:42:14.320303  406952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-624750"
	I1025 21:42:14.320655  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.333133  406952 addons.go:69] Setting gcp-auth=true in profile "addons-624750"
	I1025 21:42:14.333175  406952 mustload.go:65] Loading cluster: addons-624750
	I1025 21:42:14.335629  406952 config.go:182] Loaded profile config "addons-624750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:42:14.335943  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.337379  406952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-624750"
	I1025 21:42:14.337417  406952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-624750"
	I1025 21:42:14.337746  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.355530  406952 addons.go:69] Setting ingress=true in profile "addons-624750"
	I1025 21:42:14.355563  406952 addons.go:231] Setting addon ingress=true in "addons-624750"
	I1025 21:42:14.355618  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.356046  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
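
Note: every "Setting addon ...=true" toggle above is immediately followed by the same liveness probe; minikube shells out to docker to confirm the node container is still running before it mutates the profile. A minimal sketch of the two inspect queries that recur throughout this log (container state, and the host port Docker mapped to the guest's SSH port 22), assuming the profile name addons-624750:

	# Query the node container's state (expected: "running").
	docker container inspect addons-624750 --format '{{.State.Status}}'

	# Resolve the host port mapped to the node's SSH port 22; this is where
	# the Port:33103 values in the sshutil lines below come from.
	docker container inspect addons-624750 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
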
	I1025 21:42:14.529707  406952 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1025 21:42:14.539897  406952 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1025 21:42:14.539963  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 21:42:14.540056  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.558579  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 21:42:14.560923  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 21:42:14.560949  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 21:42:14.561031  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.566980  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.567818  406952 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-624750"
	I1025 21:42:14.570423  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.570944  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.588991  406952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:42:14.583979  406952 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1025 21:42:14.583986  406952 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1025 21:42:14.583991  406952 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:42:14.583995  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 21:42:14.584001  406952 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1025 21:42:14.584005  406952 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1025 21:42:14.595144  406952 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1025 21:42:14.597178  406952 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1025 21:42:14.597198  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1025 21:42:14.597270  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.597438  406952 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 21:42:14.597445  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 21:42:14.597478  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.608625  406952 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:42:14.608729  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 21:42:14.608807  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.614480  406952 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 21:42:14.647294  406952 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 21:42:14.647315  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1025 21:42:14.647380  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.611390  406952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:42:14.651244  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:42:14.651348  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
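
Note: the "scp memory --> /etc/kubernetes/addons/*.yaml" lines stream each addon manifest from minikube's embedded assets straight over the SSH session; nothing is read from a file first. A rough shell equivalent of one such transfer, assuming the mapped port 33103, user and key path from the sshutil lines (illustrative only; the local registry-rc.yaml stand-in is hypothetical, since minikube pipes the bytes in-process):

	# Approximate what ssh_runner does for one manifest: write stdin to the
	# target path on the node with root permissions.
	ssh -i /home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa \
	    -p 33103 docker@127.0.0.1 \
	    "sudo tee /etc/kubernetes/addons/registry-rc.yaml >/dev/null" < registry-rc.yaml
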
	I1025 21:42:14.667914  406952 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-624750" context rescaled to 1 replicas
	I1025 21:42:14.667951  406952 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 21:42:14.670544  406952 out.go:177] * Verifying Kubernetes components...
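
Note: the "rescaled to 1 replicas" line above is kapi trimming CoreDNS down to a single replica for this one-node cluster. The equivalent kubectl operation, as a sketch:

	# Scale the kube-system CoreDNS deployment down to one replica.
	kubectl --context addons-624750 -n kube-system scale deployment coredns --replicas=1
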
	I1025 21:42:14.611402  406952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:42:14.612547  406952 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:42:14.674106  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 21:42:14.675778  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 21:42:14.679425  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 21:42:14.681216  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 21:42:14.683099  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 21:42:14.695693  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 21:42:14.726245  406952 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 21:42:14.695888  406952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:42:14.695925  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 21:42:14.695610  406952 addons.go:231] Setting addon default-storageclass=true in "addons-624750"
	I1025 21:42:14.696168  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.701972  406952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 21:42:14.732662  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 21:42:14.735431  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 21:42:14.735543  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.735758  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:14.736298  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:14.772134  406952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1025 21:42:14.774343  406952 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:42:14.774406  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1025 21:42:14.774503  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.783825  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.793523  406952 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 21:42:14.785970  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.800597  406952 out.go:177]   - Using image docker.io/busybox:stable
	I1025 21:42:14.802815  406952 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:42:14.802834  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 21:42:14.802902  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:14.856333  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.857235  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.863041  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.884819  406952 node_ready.go:35] waiting up to 6m0s for node "addons-624750" to be "Ready" ...
	I1025 21:42:14.906606  406952 node_ready.go:49] node "addons-624750" has status "Ready":"True"
	I1025 21:42:14.906680  406952 node_ready.go:38] duration metric: took 20.86528ms waiting for node "addons-624750" to be "Ready" ...
	I1025 21:42:14.906706  406952 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:42:14.935296  406952 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace to be "Ready" ...
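
Note: node_ready and pod_ready above poll the API server directly from the test binary. Roughly the same checks expressed with kubectl wait (a sketch, not what minikube actually runs):

	# Node readiness; satisfied here in about 21ms.
	kubectl --context addons-624750 wait --for=condition=Ready node/addons-624750 --timeout=6m

	# System-critical pods, e.g. the coredns-5dd5756b68-n7t69 pod polled below.
	kubectl --context addons-624750 -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=6m
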
	I1025 21:42:14.947399  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.956425  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:14.988596  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:15.029045  406952 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:42:15.029088  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:42:15.029163  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:15.060584  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:15.061810  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:15.066144  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:15.105654  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:15.351335  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 21:42:15.356104  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:42:15.407388  406952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 21:42:15.407459  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 21:42:15.492882  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:42:15.542940  406952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 21:42:15.543015  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 21:42:15.605516  406952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 21:42:15.605590  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 21:42:15.664068  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:42:15.687742  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:42:15.737394  406952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 21:42:15.737416  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 21:42:15.823547  406952 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1025 21:42:15.823609  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1025 21:42:15.829716  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:42:15.841878  406952 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:42:15.841945  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 21:42:15.921609  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 21:42:15.921671  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 21:42:15.933625  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:42:15.946800  406952 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 21:42:15.946864  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 21:42:16.046094  406952 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1025 21:42:16.046117  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1025 21:42:16.060528  406952 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 21:42:16.060550  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 21:42:16.155658  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:42:16.202362  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 21:42:16.202387  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 21:42:16.205340  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 21:42:16.205405  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 21:42:16.211551  406952 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:42:16.211621  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 21:42:16.300699  406952 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1025 21:42:16.300724  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1025 21:42:16.404472  406952 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:42:16.404494  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 21:42:16.423900  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 21:42:16.423926  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 21:42:16.530485  406952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1025 21:42:16.530513  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1025 21:42:16.536396  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:42:16.669423  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:42:16.682280  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 21:42:16.682307  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 21:42:16.733614  406952 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1025 21:42:16.733646  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1025 21:42:16.922790  406952 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 21:42:16.922859  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 21:42:17.010835  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:17.022937  406952 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 21:42:17.023008  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1025 21:42:17.199361  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 21:42:17.199431  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 21:42:17.247931  406952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.454512091s)
	I1025 21:42:17.248004  406952 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
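
Note: the sed pipeline that just completed (2.45s) rewrites the CoreDNS Corefile in place: it injects a hosts stanza so host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster, and enables query logging. One way to inspect the result; the stanza in the comment is taken verbatim from the sed expression above:

	# Print the live Corefile; after the replace it contains:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl --context addons-624750 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}'
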
	I1025 21:42:17.375672  406952 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:42:17.375692  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1025 21:42:17.539026  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 21:42:17.539097  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 21:42:17.654051  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 21:42:17.796465  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 21:42:17.796536  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 21:42:17.912254  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 21:42:17.912326  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 21:42:18.080107  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.723915091s)
	I1025 21:42:18.080560  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.729142838s)
	I1025 21:42:18.190454  406952 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:42:18.190484  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 21:42:18.507029  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:42:19.016404  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:19.145630  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.652653273s)
	I1025 21:42:19.145705  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.481617013s)
	I1025 21:42:20.432791  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.745011703s)
	I1025 21:42:21.395426  406952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 21:42:21.395582  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:21.426684  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:21.502987  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:21.889102  406952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 21:42:21.994070  406952 addons.go:231] Setting addon gcp-auth=true in "addons-624750"
	I1025 21:42:21.994173  406952 host.go:66] Checking if "addons-624750" exists ...
	I1025 21:42:21.994705  406952 cli_runner.go:164] Run: docker container inspect addons-624750 --format={{.State.Status}}
	I1025 21:42:22.020029  406952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 21:42:22.020083  406952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-624750
	I1025 21:42:22.053297  406952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/addons-624750/id_rsa Username:docker}
	I1025 21:42:22.106762  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.276969936s)
	I1025 21:42:22.106791  406952 addons.go:467] Verifying addon ingress=true in "addons-624750"
	I1025 21:42:22.108866  406952 out.go:177] * Verifying ingress addon...
	I1025 21:42:22.106993  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.173294318s)
	I1025 21:42:22.107088  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.951399474s)
	I1025 21:42:22.107116  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.570695133s)
	I1025 21:42:22.107189  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.437738607s)
	I1025 21:42:22.107240  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.453110948s)
	I1025 21:42:22.111570  406952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 21:42:22.111849  406952 addons.go:467] Verifying addon metrics-server=true in "addons-624750"
	I1025 21:42:22.111864  406952 addons.go:467] Verifying addon registry=true in "addons-624750"
	I1025 21:42:22.113943  406952 out.go:177] * Verifying registry addon...
	W1025 21:42:22.112093  406952 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:42:22.115595  406952 retry.go:31] will retry after 361.303661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:42:22.116367  406952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 21:42:22.117589  406952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 21:42:22.118071  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:22.127062  406952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:42:22.127133  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:22.130531  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:22.138076  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:22.477828  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
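
Note: the warning above is the classic CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRDs that introduce that kind, and the API server rejects it before the new types are registered, hence "ensure CRDs are installed first". minikube simply retries (here with --force); the deterministic pattern is to apply the CRDs, wait for them to become Established, then apply the custom resources. A sketch:

	# 1. Register the snapshot CRDs on their own first.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

	# 2. Block until the API server has established the new kinds.
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s

	# 3. Only now apply objects of those kinds.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
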
	I1025 21:42:22.635744  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:22.643498  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:23.141502  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:23.146049  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:23.507046  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:23.639978  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:23.654381  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:23.740284  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.233199231s)
	I1025 21:42:23.740315  406952 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-624750"
	I1025 21:42:23.742628  406952 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 21:42:23.740549  406952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.720498637s)
	I1025 21:42:23.748730  406952 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1025 21:42:23.746064  406952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 21:42:23.752276  406952 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 21:42:23.754190  406952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 21:42:23.754208  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 21:42:23.767405  406952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:42:23.767483  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:23.772952  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:23.849183  406952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 21:42:23.849253  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 21:42:23.922884  406952 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:42:23.922956  406952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1025 21:42:23.992556  406952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:42:24.135238  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:24.145781  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:24.285937  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:24.545889  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.068002912s)
	I1025 21:42:24.635310  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:24.643272  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:24.779668  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:24.992847  406952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.000214756s)
	I1025 21:42:24.994482  406952 addons.go:467] Verifying addon gcp-auth=true in "addons-624750"
	I1025 21:42:24.996480  406952 out.go:177] * Verifying gcp-auth addon...
	I1025 21:42:24.999769  406952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 21:42:25.028921  406952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 21:42:25.028998  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:25.038408  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
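
Note: every kapi.go:96 line from here on is one iteration of the same poll: list the pods matching the label selector and keep waiting while they report Pending. The condition each loop is driving toward can be stated with kubectl wait (a sketch, shown for the gcp-auth selector; the ingress, registry, and csi-hostpath-driver loops are analogous):

	# What the gcp-auth wait loop is effectively checking on each tick.
	kubectl --context addons-624750 -n gcp-auth wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=6m
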
	I1025 21:42:25.137460  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:25.148969  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:25.279235  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:25.546820  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:25.635899  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:25.643671  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:25.780531  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:26.002558  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:26.043159  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:26.135296  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:26.143094  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:26.279156  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:26.542948  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:26.636032  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:26.644030  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:26.779339  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:27.042567  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:27.142471  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:27.146919  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:27.278666  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:27.542858  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:27.636266  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:27.644087  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:27.778958  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:28.024908  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:28.050247  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:28.135941  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:28.143972  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:28.283355  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:28.543353  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:28.636295  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:28.643765  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:28.780089  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:29.042811  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:29.135752  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:29.143670  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:29.279671  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:29.542273  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:29.635889  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:29.643577  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:29.779929  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:30.051900  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:30.136218  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:30.145903  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:30.279259  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:30.503011  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:30.542924  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:30.636153  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:30.643478  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:30.780445  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:31.042413  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:31.141302  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:31.147946  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:31.287416  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:31.542572  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:31.635560  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:31.644567  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:31.779166  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:32.043011  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:32.135262  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:32.147311  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:32.279010  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:32.542286  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:32.635681  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:32.643268  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:32.778717  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:33.002141  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:33.042887  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:33.136577  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:33.144820  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:33.278305  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:33.542396  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:33.635870  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:33.643860  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:33.779259  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:34.042687  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:34.139221  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:34.143974  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:34.278549  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:34.542738  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:34.635298  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:34.642921  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:34.778907  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:35.002255  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:35.042717  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:35.140976  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:35.144727  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:35.279197  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:35.541812  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:35.635171  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:35.642633  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:35.779133  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:36.042498  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:36.135011  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:36.145791  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:36.279669  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:36.542668  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:36.635652  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:36.642966  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:36.778709  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:37.002358  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:37.042233  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:37.141544  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:37.147503  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:37.279078  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:37.542219  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:37.635260  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:37.642709  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:37.778711  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:38.042658  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:38.135858  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:38.144073  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:38.278496  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:38.543197  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:38.635339  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:38.643174  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:38.779112  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:39.042857  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:39.138487  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:39.144534  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:39.278694  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:39.502353  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:39.542987  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:39.635349  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:39.643123  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:39.779009  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:40.043313  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:40.136160  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:40.143117  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:40.278748  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:40.542371  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:40.635443  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:40.643149  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:40.779038  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:41.042923  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:41.137048  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:41.142854  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:41.278963  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:41.502622  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:41.543128  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:41.635335  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:41.642672  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:41.779292  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:42.042684  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:42.139512  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:42.144959  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:42.280247  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:42.541943  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:42.634982  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:42.643585  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:42.778556  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:43.042276  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:43.137213  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:43.142640  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:43.279842  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:43.505345  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:43.542817  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:43.635586  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:43.642907  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:43.778619  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:44.043097  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:44.135301  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:44.143476  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:44.279737  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:44.542466  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:44.635024  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:44.642528  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:44.779285  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:45.059631  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:45.143976  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:45.150909  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:45.282608  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:45.542557  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:45.635769  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:45.643492  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:45.779228  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:46.002508  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:46.043318  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:46.137161  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:46.150476  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:46.279578  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:46.542176  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:46.635229  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:46.642726  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:46.779620  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:47.042117  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:47.136349  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:47.143373  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:47.279705  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:47.542989  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:47.635290  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:47.643412  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:47.780054  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:48.003527  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:48.043711  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:48.136036  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:48.143766  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:48.279558  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:48.542338  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:48.635297  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:48.642930  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:48.778836  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:49.042426  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:49.139722  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:49.143494  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:49.280216  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:49.542677  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:49.636414  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:49.643676  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:49.778893  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:50.042955  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:50.135917  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:50.142887  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:50.279524  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:50.508684  406952 pod_ready.go:102] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"False"
	I1025 21:42:50.543508  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:50.638811  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:50.644312  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:50.778959  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:51.045113  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:51.136250  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:51.146604  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:51.282165  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:51.519042  406952 pod_ready.go:92] pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:51.519115  406952 pod_ready.go:81] duration metric: took 36.554140208s waiting for pod "coredns-5dd5756b68-n7t69" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.519141  406952 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ng46r" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.530843  406952 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ng46r" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ng46r" not found
	I1025 21:42:51.530917  406952 pod_ready.go:81] duration metric: took 11.755017ms waiting for pod "coredns-5dd5756b68-ng46r" in "kube-system" namespace to be "Ready" ...
	E1025 21:42:51.530943  406952 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ng46r" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ng46r" not found
	I1025 21:42:51.530963  406952 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.539339  406952 pod_ready.go:92] pod "etcd-addons-624750" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:51.539405  406952 pod_ready.go:81] duration metric: took 8.40829ms waiting for pod "etcd-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.539433  406952 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.545372  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:51.548295  406952 pod_ready.go:92] pod "kube-apiserver-addons-624750" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:51.548372  406952 pod_ready.go:81] duration metric: took 8.916792ms waiting for pod "kube-apiserver-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.548399  406952 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.558849  406952 pod_ready.go:92] pod "kube-controller-manager-addons-624750" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:51.558908  406952 pod_ready.go:81] duration metric: took 10.48864ms waiting for pod "kube-controller-manager-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.558949  406952 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwszj" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.636129  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:51.643528  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:51.700346  406952 pod_ready.go:92] pod "kube-proxy-wwszj" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:51.700370  406952 pod_ready.go:81] duration metric: took 141.398272ms waiting for pod "kube-proxy-wwszj" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.700383  406952 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:51.779610  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:52.042881  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:52.100732  406952 pod_ready.go:92] pod "kube-scheduler-addons-624750" in "kube-system" namespace has status "Ready":"True"
	I1025 21:42:52.100756  406952 pod_ready.go:81] duration metric: took 400.365602ms waiting for pod "kube-scheduler-addons-624750" in "kube-system" namespace to be "Ready" ...
	I1025 21:42:52.100765  406952 pod_ready.go:38] duration metric: took 37.194036038s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
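The pod_ready.go loop above repeatedly fetches each system-critical pod and waits for its Ready condition to flip to True. A minimal client-go sketch of that readiness poll (illustrative only; minikube's real helper lives in pod_ready.go and differs in detail, and the interval/timeout here are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// matching the pod_ready.go:92 "Ready":"True" lines above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-n7t69"))
}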
	I1025 21:42:52.100782  406952 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:42:52.100847  406952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:42:52.116173  406952 api_server.go:72] duration metric: took 37.448194639s to wait for apiserver process to appear ...
	I1025 21:42:52.116198  406952 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:42:52.116216  406952 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 21:42:52.125542  406952 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 21:42:52.126901  406952 api_server.go:141] control plane version: v1.28.3
	I1025 21:42:52.126929  406952 api_server.go:131] duration metric: took 10.723327ms to wait for apiserver health ...
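api_server.go above first waits for the kube-apiserver process (via pgrep) and then probes /healthz until it returns 200 "ok". A standalone sketch of that probe (the insecure TLS config is an assumption for brevity; a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver health endpoint seen in the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect 200: ok
}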
	I1025 21:42:52.126938  406952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:42:52.134970  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:52.143826  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:52.279185  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:52.311340  406952 system_pods.go:59] 18 kube-system pods found
	I1025 21:42:52.311377  406952 system_pods.go:61] "coredns-5dd5756b68-n7t69" [4889e9a3-32ae-499d-919a-4396945a528e] Running
	I1025 21:42:52.311389  406952 system_pods.go:61] "csi-hostpath-attacher-0" [f55b8c37-b544-475c-a55e-f5d208d3b6cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:42:52.311402  406952 system_pods.go:61] "csi-hostpath-resizer-0" [70936a6f-17c0-4950-9e50-41b9660b4439] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:42:52.311415  406952 system_pods.go:61] "csi-hostpathplugin-xxns4" [ffca6efb-e351-4852-b58a-4cef293bd8e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:42:52.311425  406952 system_pods.go:61] "etcd-addons-624750" [73b281c6-3026-43bf-b1a4-fbfe3597876b] Running
	I1025 21:42:52.311440  406952 system_pods.go:61] "kindnet-82wq4" [db8ccabd-12d8-4c0d-88db-df7c4ea30a12] Running
	I1025 21:42:52.311446  406952 system_pods.go:61] "kube-apiserver-addons-624750" [394c21d4-6d44-4d9c-b570-6d99264f5e70] Running
	I1025 21:42:52.311460  406952 system_pods.go:61] "kube-controller-manager-addons-624750" [bfe72ec3-40c8-4ec5-acd6-beec0cf21033] Running
	I1025 21:42:52.311478  406952 system_pods.go:61] "kube-ingress-dns-minikube" [0edc7015-fa81-45c8-aa1e-0d098a17dfb0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:42:52.311488  406952 system_pods.go:61] "kube-proxy-wwszj" [55afbad3-f7cf-49d7-a812-4f22833a216e] Running
	I1025 21:42:52.311493  406952 system_pods.go:61] "kube-scheduler-addons-624750" [57e3a24f-0d03-4b2a-b1e1-57e77f90858a] Running
	I1025 21:42:52.311500  406952 system_pods.go:61] "metrics-server-7c66d45ddc-wvkqn" [e2f2b1ac-6a11-4ff9-9bc2-c1a2fb53e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:42:52.311516  406952 system_pods.go:61] "nvidia-device-plugin-daemonset-rljkw" [dff4c924-1e4d-4071-b0b1-2306f0328865] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 21:42:52.311533  406952 system_pods.go:61] "registry-22k55" [774a42c5-49ca-495e-9cf7-14e3567306e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 21:42:52.311546  406952 system_pods.go:61] "registry-proxy-th8cc" [d49467c4-7722-4c28-8665-5fcdef655317] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:42:52.311563  406952 system_pods.go:61] "snapshot-controller-58dbcc7b99-mnwlq" [06b4eba1-3994-4998-8654-c263a5f94727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:42:52.311577  406952 system_pods.go:61] "snapshot-controller-58dbcc7b99-qczpg" [4dd9aa44-248e-4346-9cf7-53c3b8ceda1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:42:52.311588  406952 system_pods.go:61] "storage-provisioner" [769cafa9-3c8b-42fe-9295-280f7f496cea] Running
	I1025 21:42:52.311600  406952 system_pods.go:74] duration metric: took 184.65028ms to wait for pod list to return data ...
	I1025 21:42:52.311615  406952 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:42:52.499552  406952 default_sa.go:45] found service account: "default"
	I1025 21:42:52.499582  406952 default_sa.go:55] duration metric: took 187.959535ms for default service account to be created ...
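The "default" service account is created asynchronously by kube-controller-manager after a namespace appears, which is why default_sa.go polls for it. A one-call sketch of that check (the helper name is hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// defaultSAExists reports whether the "default" ServiceAccount is present yet.
func defaultSAExists(cs *kubernetes.Clientset, ns string) bool {
	_, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
	return err == nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(defaultSAExists(kubernetes.NewForConfigOrDie(cfg), "default"))
}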
	I1025 21:42:52.499594  406952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:42:52.542009  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:52.636269  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:52.643136  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:52.709154  406952 system_pods.go:86] 18 kube-system pods found
	I1025 21:42:52.709187  406952 system_pods.go:89] "coredns-5dd5756b68-n7t69" [4889e9a3-32ae-499d-919a-4396945a528e] Running
	I1025 21:42:52.709198  406952 system_pods.go:89] "csi-hostpath-attacher-0" [f55b8c37-b544-475c-a55e-f5d208d3b6cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:42:52.709207  406952 system_pods.go:89] "csi-hostpath-resizer-0" [70936a6f-17c0-4950-9e50-41b9660b4439] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:42:52.709216  406952 system_pods.go:89] "csi-hostpathplugin-xxns4" [ffca6efb-e351-4852-b58a-4cef293bd8e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:42:52.709224  406952 system_pods.go:89] "etcd-addons-624750" [73b281c6-3026-43bf-b1a4-fbfe3597876b] Running
	I1025 21:42:52.709231  406952 system_pods.go:89] "kindnet-82wq4" [db8ccabd-12d8-4c0d-88db-df7c4ea30a12] Running
	I1025 21:42:52.709236  406952 system_pods.go:89] "kube-apiserver-addons-624750" [394c21d4-6d44-4d9c-b570-6d99264f5e70] Running
	I1025 21:42:52.709246  406952 system_pods.go:89] "kube-controller-manager-addons-624750" [bfe72ec3-40c8-4ec5-acd6-beec0cf21033] Running
	I1025 21:42:52.709255  406952 system_pods.go:89] "kube-ingress-dns-minikube" [0edc7015-fa81-45c8-aa1e-0d098a17dfb0] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 21:42:52.709265  406952 system_pods.go:89] "kube-proxy-wwszj" [55afbad3-f7cf-49d7-a812-4f22833a216e] Running
	I1025 21:42:52.709271  406952 system_pods.go:89] "kube-scheduler-addons-624750" [57e3a24f-0d03-4b2a-b1e1-57e77f90858a] Running
	I1025 21:42:52.709278  406952 system_pods.go:89] "metrics-server-7c66d45ddc-wvkqn" [e2f2b1ac-6a11-4ff9-9bc2-c1a2fb53e0d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:42:52.709291  406952 system_pods.go:89] "nvidia-device-plugin-daemonset-rljkw" [dff4c924-1e4d-4071-b0b1-2306f0328865] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 21:42:52.709299  406952 system_pods.go:89] "registry-22k55" [774a42c5-49ca-495e-9cf7-14e3567306e2] Running
	I1025 21:42:52.709310  406952 system_pods.go:89] "registry-proxy-th8cc" [d49467c4-7722-4c28-8665-5fcdef655317] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:42:52.709318  406952 system_pods.go:89] "snapshot-controller-58dbcc7b99-mnwlq" [06b4eba1-3994-4998-8654-c263a5f94727] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:42:52.709327  406952 system_pods.go:89] "snapshot-controller-58dbcc7b99-qczpg" [4dd9aa44-248e-4346-9cf7-53c3b8ceda1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:42:52.709336  406952 system_pods.go:89] "storage-provisioner" [769cafa9-3c8b-42fe-9295-280f7f496cea] Running
	I1025 21:42:52.709342  406952 system_pods.go:126] duration metric: took 209.743659ms to wait for k8s-apps to be running ...
	I1025 21:42:52.709349  406952 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:42:52.709412  406952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:42:52.726408  406952 system_svc.go:56] duration metric: took 17.04924ms WaitForService to wait for kubelet.
	I1025 21:42:52.726481  406952 kubeadm.go:581] duration metric: took 38.058508657s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
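system_svc.go verifies the kubelet unit with `sudo systemctl is-active --quiet service kubelet`, which exits 0 only when the unit is active. A local sketch of the same check (minikube runs it over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner command in the log; exit status 0 means active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}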
	I1025 21:42:52.726516  406952 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:42:52.779346  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:52.899827  406952 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 21:42:52.899862  406952 node_conditions.go:123] node cpu capacity is 2
	I1025 21:42:52.899877  406952 node_conditions.go:105] duration metric: took 173.341259ms to run NodePressure ...
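The NodePressure step reads each node's capacity (the 203034800Ki ephemeral storage and 2-CPU figures above) and confirms the pressure conditions are not set. A client-go sketch of that verification (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// Healthy nodes report these conditions as False.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}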
	I1025 21:42:52.899910  406952 start.go:228] waiting for startup goroutines ...
	I1025 21:42:53.042245  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:53.135322  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:53.143065  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:53.279151  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:53.542967  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:53.639535  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:53.642446  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:53.778854  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:54.042572  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:54.136852  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:54.144264  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:54.279384  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:54.542067  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:54.635644  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:54.644531  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:54.778929  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:55.044033  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:55.136113  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:55.146713  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:55.280183  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:55.542042  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:55.635515  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:55.643439  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:55.784617  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:56.042723  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:56.137392  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:56.145213  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:56.286992  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:56.544119  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:56.636011  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:56.642973  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:56.778319  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:57.042266  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:57.136102  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:57.147090  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:57.279424  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:57.544194  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:57.636711  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:57.643834  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:57.780154  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:58.043676  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:58.135496  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:58.143405  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:58.279072  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:58.545773  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:58.636199  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:58.643259  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:58.780744  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:59.042504  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:59.135520  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:59.143400  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:59.278758  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:42:59.542758  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:42:59.635711  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:42:59.644334  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:42:59.779535  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:00.088506  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:00.136562  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:00.147997  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:00.280455  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:00.542031  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:00.639233  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:00.650113  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:00.778816  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:01.043504  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:01.136300  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:01.144559  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:01.279512  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:01.543317  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:01.636895  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:01.644438  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:01.779456  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:02.044038  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:02.138532  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:02.143285  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:02.279209  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:02.541869  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:02.635851  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:02.643693  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:02.779240  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:03.042271  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:03.137263  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:03.147600  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:03.279435  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:03.543130  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:03.635928  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:03.644896  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:03.778764  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:04.043023  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:04.135912  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:04.143776  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:04.278830  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:04.542285  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:04.637296  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:04.642887  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:04.778193  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:05.042717  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:05.136080  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:05.143394  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:05.280161  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:05.542875  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:05.637705  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:05.648145  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:05.778882  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:06.043161  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:06.136299  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:06.143842  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:06.278703  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:06.554197  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:06.634941  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:06.643940  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:06.780197  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:07.043153  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:07.141263  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:07.144917  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:07.278427  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:07.542490  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:07.635332  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:07.643338  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:07.779855  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:08.043386  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:08.136194  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:08.143428  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:08.280317  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:08.542293  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:08.635931  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:08.643638  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:08.779870  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:09.046528  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:09.135421  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:09.144855  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:09.280044  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:09.542749  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:09.635320  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:09.643366  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:43:09.780183  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:10.044583  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:10.135657  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:10.143732  406952 kapi.go:107] duration metric: took 48.027359996s to wait for kubernetes.io/minikube-addons=registry ...
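The kapi.go:96 lines that dominate this log are one poll loop per addon label selector; "Pending: [<nil>]" is printed while no matching pod is Running yet, and kapi.go:107 records the total once the selector is satisfied, as for the registry label here. A sketch of that loop (interval and timeout are illustrative, not minikube's actual values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel blocks until every pod matching the selector is Running.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollImmediate(500*time.Millisecond, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // still "Pending" from the caller's point of view
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry"))
}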
	I1025 21:43:10.279404  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:10.542086  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:10.634784  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:10.778282  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:11.042571  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:11.140839  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:11.279194  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:11.543624  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:11.635917  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:11.781321  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:12.044283  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:12.135957  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:12.280501  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:12.543037  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:43:12.636700  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:12.789181  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:13.042829  406952 kapi.go:107] duration metric: took 48.043055695s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 21:43:13.044874  406952 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-624750 cluster.
	I1025 21:43:13.046751  406952 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 21:43:13.048269  406952 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
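Per the gcp-auth notes above, the webhook skips credential injection for pods carrying the gcp-auth-skip-secret label key. A sketch of a pod spec that opts out (pod name, image, and label value are illustrative; the key is what the message specifies):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	fmt.Println(pod.Name, pod.Labels)
}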
	I1025 21:43:13.136409  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:13.279054  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:13.635717  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:13.779026  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:14.135833  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:14.278529  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:14.635228  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:14.778927  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:15.135508  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:15.279167  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:15.637930  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:15.778178  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:16.135695  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:16.279646  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:16.639391  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:16.779199  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:17.138822  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:17.280737  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:17.636216  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:17.780106  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:18.136728  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:18.279594  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:18.637628  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:18.779189  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:19.139033  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:19.278265  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:19.635353  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:19.778666  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:20.136147  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:20.278534  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:20.636397  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:20.779064  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:21.135752  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:21.279309  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:21.641874  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:21.779121  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:22.135453  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:22.280181  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:22.635550  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:22.778984  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:23.135756  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:23.279270  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:23.634850  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:23.779780  406952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:43:24.135205  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:24.279509  406952 kapi.go:107] duration metric: took 1m0.533442558s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 21:43:24.635780  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:25.138170  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:25.635417  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:26.135236  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:26.635247  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:27.135987  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:27.635984  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:28.135722  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:28.636076  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:29.138685  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:29.635825  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:30.137703  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:30.635899  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:31.138227  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:31.635012  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:32.135514  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:32.635407  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:33.137237  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:33.636264  406952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:43:34.136141  406952 kapi.go:107] duration metric: took 1m12.024568005s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 21:43:34.138200  406952 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1025 21:43:34.139970  406952 addons.go:502] enable addons completed in 1m19.871504537s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1025 21:43:34.140012  406952 start.go:233] waiting for cluster config update ...
	I1025 21:43:34.140041  406952 start.go:242] writing updated cluster config ...
	I1025 21:43:34.140331  406952 ssh_runner.go:195] Run: rm -f paused
	I1025 21:43:34.237997  406952 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1025 21:43:34.240810  406952 out.go:177] * Done! kubectl is now configured to use "addons-624750" cluster and "default" namespace by default
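Each kapi.go:96 line above is one iteration of minikube's readiness poll: the addon's pods are listed by label selector on a short fixed interval until they leave Pending, and kapi.go:107 then records the total wait (about 1m0.5s for csi-hostpath-driver and 1m12s for ingress-nginx here). Below is a minimal client-go sketch of that pattern; waitForPodRunning and the 500ms interval are illustrative assumptions, not minikube's actual code.

	package kapi

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodRunning polls pods matching selector in ns until all of them
	// report Running or the timeout elapses, mirroring the repeated
	// "waiting for pod ... current state" lines in the log above.
	func waitForPodRunning(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // list failed or pods not created yet: retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
					return false, nil // still Pending or starting: retry
				}
			}
			return true, nil // every matching pod is Running
		})
	}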
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4903687aa5731       97e050c3e21e9       8 seconds ago        Exited              hello-world-app           2                   0c6fe96bc7b94       hello-world-app-5d77478584-t4wl4
	91fc52523e36a       aae348c9fbd40       33 seconds ago       Running             nginx                     0                   ccd4579322d0b       nginx
	428f71e4a00a0       1e8576358b7fb       About a minute ago   Running             headlamp                  0                   1f0d25690e7a6       headlamp-94b766c-gf5sk
	d6c9dfa9a54cb       2a5f29343eb03       2 minutes ago        Running             gcp-auth                  0                   dd6f1e59abbd4       gcp-auth-d4c87556c-t5848
	9e0de884f82a7       af594c6a879f2       2 minutes ago        Exited              patch                     0                   0acb0fc231fd4       ingress-nginx-admission-patch-rw6v2
	a0af08b8142e8       af594c6a879f2       2 minutes ago        Exited              create                    0                   61bfc4affe761       ingress-nginx-admission-create-c68zt
	b2f7cec366ba7       97e04611ad434       2 minutes ago        Running             coredns                   0                   b04bc614e3ff9       coredns-5dd5756b68-n7t69
	636716017ad1e       ba04bb24b9575       3 minutes ago        Running             storage-provisioner       0                   6199d5c948548       storage-provisioner
	9ba7c92d752cb       a5dd5cdd6d3ef       3 minutes ago        Running             kube-proxy                0                   28ad53c4bd9dc       kube-proxy-wwszj
	07fed9e92b32e       04b4eaa3d3db8       3 minutes ago        Running             kindnet-cni               0                   051ca06e1a35a       kindnet-82wq4
	c0d6784572c9a       42a4e73724daa       3 minutes ago        Running             kube-scheduler            0                   dc0f5b1e92a6a       kube-scheduler-addons-624750
	f8224d38a2480       8276439b4f237       3 minutes ago        Running             kube-controller-manager   0                   351222c6559be       kube-controller-manager-addons-624750
	8373e4b39e248       9cdd6470f48c8       3 minutes ago        Running             etcd                      0                   5792b68b45cfd       etcd-addons-624750
	d325c49a59ba5       537e9a59ee2fd       3 minutes ago        Running             kube-apiserver            0                   c5503b602b7bb       kube-apiserver-addons-624750
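In the table above everything is Running except hello-world-app, which Exited 8 seconds ago on ATTEMPT 2; that is the crash loop the kubelet section below reports as CrashLoopBackOff. A listing like this can be reproduced on the node with the CRI client (assuming crictl, which minikube's containerd images ship):

	out/minikube-linux-arm64 -p addons-624750 ssh "sudo crictl ps -a"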
	
	* 
	* ==> containerd <==
	* Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.283358942Z" level=info msg="shim disconnected" id=4903687aa573154d69200f24ebfc27df46bccf445fd62ae8239d3f5f9c156532
	Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.283418461Z" level=warning msg="cleaning up after shim disconnected" id=4903687aa573154d69200f24ebfc27df46bccf445fd62ae8239d3f5f9c156532 namespace=k8s.io
	Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.283430407Z" level=info msg="cleaning up dead shim"
	Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.294508852Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11883 runtime=io.containerd.runc.v2\n"
	Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.358880005Z" level=info msg="RemoveContainer for \"6c021779740c7b099edaea14e94213ed92fa923b6f73f8905808cc621aeba507\""
	Oct 25 21:45:18 addons-624750 containerd[745]: time="2023-10-25T21:45:18.365657533Z" level=info msg="RemoveContainer for \"6c021779740c7b099edaea14e94213ed92fa923b6f73f8905808cc621aeba507\" returns successfully"
	Oct 25 21:45:19 addons-624750 containerd[745]: time="2023-10-25T21:45:19.099918197Z" level=info msg="StopContainer for \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" with timeout 2 (s)"
	Oct 25 21:45:19 addons-624750 containerd[745]: time="2023-10-25T21:45:19.100418365Z" level=info msg="Stop container \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" with signal terminated"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.108790892Z" level=info msg="Kill container \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\""
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.194309547Z" level=info msg="shim disconnected" id=19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.194371544Z" level=warning msg="cleaning up after shim disconnected" id=19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f namespace=k8s.io
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.194383351Z" level=info msg="cleaning up dead shim"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.204534420Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11923 runtime=io.containerd.runc.v2\n"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.207411500Z" level=info msg="StopContainer for \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" returns successfully"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.208060072Z" level=info msg="StopPodSandbox for \"e9301c13865b8578c366edcb6a3f50f62c452ed7eae21101e701c726abf1978b\""
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.208136813Z" level=info msg="Container to stop \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.243652794Z" level=info msg="shim disconnected" id=e9301c13865b8578c366edcb6a3f50f62c452ed7eae21101e701c726abf1978b
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.243716990Z" level=warning msg="cleaning up after shim disconnected" id=e9301c13865b8578c366edcb6a3f50f62c452ed7eae21101e701c726abf1978b namespace=k8s.io
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.243730979Z" level=info msg="cleaning up dead shim"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.254171315Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11957 runtime=io.containerd.runc.v2\n"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.304265827Z" level=info msg="TearDown network for sandbox \"e9301c13865b8578c366edcb6a3f50f62c452ed7eae21101e701c726abf1978b\" successfully"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.304322836Z" level=info msg="StopPodSandbox for \"e9301c13865b8578c366edcb6a3f50f62c452ed7eae21101e701c726abf1978b\" returns successfully"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.381802838Z" level=info msg="RemoveContainer for \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\""
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.387192640Z" level=info msg="RemoveContainer for \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" returns successfully"
	Oct 25 21:45:21 addons-624750 containerd[745]: time="2023-10-25T21:45:21.387813914Z" level=error msg="ContainerStatus for \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\": not found"
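The lines above are a normal containerd teardown for the ingress controller as the addon is disabled: StopContainer delivers SIGTERM, Kill follows once the 2-second grace period lapses, the shim exits and is cleaned up, StopPodSandbox tears down the sandbox network, and RemoveContainer deletes the record. The closing ContainerStatus "not found" error is benign; the kubelet queried a container the previous step had already removed. The same stop can be issued by hand with crictl stop --timeout 2 <container-id> (again assuming crictl on the node).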
	
	* 
	* ==> coredns [b2f7cec366ba77b6d1eb31855b498caf129da9b6188b760a18cd88beafb340ff] <==
	* [INFO] 10.244.0.19:38591 - 38823 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000183742s
	[INFO] 10.244.0.19:40941 - 57377 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056951s
	[INFO] 10.244.0.19:40941 - 24580 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058879s
	[INFO] 10.244.0.19:40941 - 9200 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059904s
	[INFO] 10.244.0.19:40941 - 58344 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001317131s
	[INFO] 10.244.0.19:40941 - 62029 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001219236s
	[INFO] 10.244.0.19:40941 - 28089 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074485s
	[INFO] 10.244.0.19:59445 - 15722 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000127448s
	[INFO] 10.244.0.19:33359 - 23752 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060905s
	[INFO] 10.244.0.19:59445 - 58739 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125109s
	[INFO] 10.244.0.19:33359 - 35555 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000249342s
	[INFO] 10.244.0.19:59445 - 43206 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091567s
	[INFO] 10.244.0.19:33359 - 21164 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056426s
	[INFO] 10.244.0.19:59445 - 39206 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068183s
	[INFO] 10.244.0.19:33359 - 56248 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040951s
	[INFO] 10.244.0.19:59445 - 24178 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098961s
	[INFO] 10.244.0.19:33359 - 23263 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039334s
	[INFO] 10.244.0.19:33359 - 54712 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102571s
	[INFO] 10.244.0.19:59445 - 5397 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000207808s
	[INFO] 10.244.0.19:59445 - 27925 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001705415s
	[INFO] 10.244.0.19:33359 - 48884 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001856157s
	[INFO] 10.244.0.19:33359 - 61603 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001554148s
	[INFO] 10.244.0.19:59445 - 41255 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001769455s
	[INFO] 10.244.0.19:33359 - 42832 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000397236s
	[INFO] 10.244.0.19:59445 - 51786 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119989s
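The NXDOMAIN/NOERROR pattern above is ordinary ndots search-path expansion, not a resolution failure: hello-world-app.default.svc.cluster.local has four dots, fewer than ndots:5, so the resolver in the querying pod (10.244.0.19, in the ingress-nginx namespace judging by the first suffix) tries every search suffix first, each returning NXDOMAIN, and only the final absolute query answers NOERROR with the Service IP. A pod resolv.conf in such a cluster looks roughly like the sketch below; the nameserver is typically 10.96.0.10 (the kube-dns Service) and the trailing suffix comes from the AWS host, so treat the exact values as assumptions:

	nameserver 10.96.0.10
	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	options ndots:5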
	
	* 
	* ==> describe nodes <==
	* Name:               addons-624750
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-624750
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=addons-624750
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_42_02_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-624750
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:41:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-624750
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:45:05 +0000   Wed, 25 Oct 2023 21:41:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:45:05 +0000   Wed, 25 Oct 2023 21:41:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:45:05 +0000   Wed, 25 Oct 2023 21:41:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:45:05 +0000   Wed, 25 Oct 2023 21:42:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-624750
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdddd8e350194a888afea81673aa4f60
	  System UUID:                b32a84d0-ca86-41bf-a526-ba1e9057853c
	  Boot ID:                    dc9d99ba-cdb2-4b53-84d7-7ab685ba34f1
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-t4wl4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-d4c87556c-t5848                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  headlamp                    headlamp-94b766c-gf5sk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-n7t69                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m13s
	  kube-system                 etcd-addons-624750                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m25s
	  kube-system                 kindnet-82wq4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m13s
	  kube-system                 kube-apiserver-addons-624750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-addons-624750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-proxy-wwszj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-addons-624750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  Starting                 3m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m32s (x8 over 3m32s)  kubelet          Node addons-624750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s (x8 over 3m32s)  kubelet          Node addons-624750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s (x7 over 3m32s)  kubelet          Node addons-624750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m25s                  kubelet          Node addons-624750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s                  kubelet          Node addons-624750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s                  kubelet          Node addons-624750 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m25s                  kubelet          Node addons-624750 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m25s                  kubelet          Node addons-624750 status is now: NodeReady
	  Normal  RegisteredNode           3m13s                  node-controller  Node addons-624750 event: Registered Node addons-624750 in Controller
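The Allocated resources figures are straight column sums over the pod table: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the 2000m allocatable is the reported 42%. Memory requests work the same way: 70Mi + 100Mi + 50Mi = 220Mi.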
	
	* 
	* ==> dmesg <==
	* [  +0.000967] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=0000000055143906
	[  +0.001070] FS-Cache: N-key=[8] '7f385c0100000000'
	[  +0.002479] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=000000003d2d56a7
	[  +0.001062] FS-Cache: O-key=[8] '7f385c0100000000'
	[  +0.000718] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=0000000095370f1c
	[  +0.001050] FS-Cache: N-key=[8] '7f385c0100000000'
	[  +2.498939] FS-Cache: Duplicate cookie detected
	[  +0.000876] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001020] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=00000000e7851712
	[  +0.001091] FS-Cache: O-key=[8] '7e385c0100000000'
	[  +0.000722] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=0000000055143906
	[  +0.001062] FS-Cache: N-key=[8] '7e385c0100000000'
	[  +0.474029] FS-Cache: Duplicate cookie detected
	[  +0.000905] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000986] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=00000000dd16fe58
	[  +0.001079] FS-Cache: O-key=[8] '89385c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001009] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=000000000ab10f82
	[  +0.001047] FS-Cache: N-key=[8] '89385c0100000000'
	[Oct25 20:38] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct25 21:18] hrtimer: interrupt took 74755382 ns
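The FS-Cache duplicate cookie and 9p.inode messages appear to be kernel-side caching noise from the CI host's 9p mounts, and the hrtimer warning a one-off scheduling hiccup; none of it bears on the test failure.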
	
	* 
	* ==> etcd [8373e4b39e248d548d9dfcee07c9f398b8456f941af57dab96b66cbf12e2e618] <==
	* {"level":"info","ts":"2023-10-25T21:41:54.998138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-25T21:41:54.998333Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-25T21:41:55.000834Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-25T21:41:55.002483Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-25T21:41:55.000738Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-25T21:41:55.00382Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-25T21:41:55.00394Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-25T21:41:55.036101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-25T21:41:55.036348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-25T21:41:55.036438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-25T21:41:55.03658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-25T21:41:55.036668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-25T21:41:55.036759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-25T21:41:55.036835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-25T21:41:55.037615Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:41:55.038124Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-624750 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T21:41:55.038306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:41:55.039139Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:41:55.039519Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:41:55.040254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-25T21:41:55.039625Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:41:55.050892Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:41:55.054329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-25T21:41:55.058038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T21:41:55.058272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [d6c9dfa9a54cb5e38cd5aa1571cd103a8b2b6a0e63f19f20eb336e4716a1ad67] <==
	* 2023/10/25 21:43:12 GCP Auth Webhook started!
	2023/10/25 21:43:40 Ready to marshal response ...
	2023/10/25 21:43:40 Ready to write response ...
	2023/10/25 21:43:41 Ready to marshal response ...
	2023/10/25 21:43:41 Ready to write response ...
	2023/10/25 21:43:44 Ready to marshal response ...
	2023/10/25 21:43:44 Ready to write response ...
	2023/10/25 21:43:49 Ready to marshal response ...
	2023/10/25 21:43:49 Ready to write response ...
	2023/10/25 21:43:56 Ready to marshal response ...
	2023/10/25 21:43:56 Ready to write response ...
	2023/10/25 21:43:56 Ready to marshal response ...
	2023/10/25 21:43:56 Ready to write response ...
	2023/10/25 21:43:56 Ready to marshal response ...
	2023/10/25 21:43:56 Ready to write response ...
	2023/10/25 21:44:21 Ready to marshal response ...
	2023/10/25 21:44:21 Ready to write response ...
	2023/10/25 21:44:35 Ready to marshal response ...
	2023/10/25 21:44:35 Ready to write response ...
	2023/10/25 21:44:51 Ready to marshal response ...
	2023/10/25 21:44:51 Ready to write response ...
	2023/10/25 21:45:00 Ready to marshal response ...
	2023/10/25 21:45:00 Ready to write response ...
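Each marshal/write pair is one admission request served by the webhook as the parallel addon tests create pods: the 21:43:56 burst lines up with headlamp (age 90s in the node view above), 21:44:51 with nginx, and 21:45:00 with hello-world-app.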
	
	* 
	* ==> kernel <==
	*  21:45:26 up  1:27,  0 users,  load average: 1.81, 2.32, 2.89
	Linux addons-624750 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [07fed9e92b32ef71a545f24150b1e08da8fd6d891e7686ad008c37390db93810] <==
	* I1025 21:43:25.697940       1 main.go:227] handling current node
	I1025 21:43:35.701862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:43:35.701893       1 main.go:227] handling current node
	I1025 21:43:45.706103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:43:45.706132       1 main.go:227] handling current node
	I1025 21:43:55.717959       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:43:55.718656       1 main.go:227] handling current node
	I1025 21:44:05.724851       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:05.724880       1 main.go:227] handling current node
	I1025 21:44:15.737347       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:15.737377       1 main.go:227] handling current node
	I1025 21:44:25.750566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:25.750593       1 main.go:227] handling current node
	I1025 21:44:35.763076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:35.763105       1 main.go:227] handling current node
	I1025 21:44:45.768309       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:45.768337       1 main.go:227] handling current node
	I1025 21:44:55.780801       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:44:55.780826       1 main.go:227] handling current node
	I1025 21:45:05.792722       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:45:05.792750       1 main.go:227] handling current node
	I1025 21:45:15.796577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:45:15.796607       1 main.go:227] handling current node
	I1025 21:45:25.808877       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:45:25.808920       1 main.go:227] handling current node
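kindnet re-lists nodes on a roughly 10-second cycle (note the timestamps) and, with a single node, only ever handles the current one; this log is healthy background noise.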
	
	* 
	* ==> kube-apiserver [d325c49a59ba5e0601d8caf36a022b6408747f51055d606d09873edba1042d97] <==
	* I1025 21:44:44.709278       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1025 21:44:45.722360       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1025 21:44:50.868204       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 21:44:51.135441       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.135487       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.152580       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.152655       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.230636       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.230953       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.255905       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.255959       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.297393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.297446       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.319062       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.319115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.341379       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.240.162"}
	I1025 21:44:51.357846       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.357911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:44:51.364641       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:44:51.364690       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 21:44:52.256000       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 21:44:52.365832       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 21:44:52.385865       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 21:45:00.728284       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.137.47"}
	I1025 21:45:21.650652       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
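The "Terminating all watchers" warnings track the test's addon teardown: the traces.gadget.kinvolk.io cacher goes at 21:44:45 as inspektor-gadget is removed, and the three snapshot.storage.k8s.io cachers at 21:44:52 as the volumesnapshots CRDs are deleted. Terminating watchers is the expected side effect of deleting a served CRD.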
	
	* 
	* ==> kube-controller-manager [f8224d38a2480ac54fcf69c845beb66af9126a0914a52c6c988d5a57e81f4215] <==
	* W1025 21:45:01.272864       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:01.272899       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:45:01.629863       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:01.629909       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:45:03.323947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.462µs"
	I1025 21:45:04.330941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.658µs"
	I1025 21:45:05.331539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.873µs"
	W1025 21:45:09.278031       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:09.278068       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:45:09.538495       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:09.538530       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:45:12.942990       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:12.943025       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:45:14.009862       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1025 21:45:14.010376       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 21:45:14.488671       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1025 21:45:14.488721       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 21:45:18.062512       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1025 21:45:18.070222       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1025 21:45:18.070693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="4.767µs"
	I1025 21:45:18.366968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="102.603µs"
	W1025 21:45:25.377574       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:25.377611       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:45:25.854579       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:45:25.854622       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
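The recurring PartialObjectMetadata failures are most likely the metadata informers (used by the quota and garbage-collector controllers) still watching the CRD-backed resources deleted above; they retry until a discovery resync drops the vanished groups, which is consistent with the quota and garbage-collector caches re-syncing at 21:45:14.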
	
	* 
	* ==> kube-proxy [9ba7c92d752cb9cc69a06aaf9a11d77426d83632753582a2eba660f1fe923351] <==
	* I1025 21:42:15.323883       1 server_others.go:69] "Using iptables proxy"
	I1025 21:42:15.348238       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1025 21:42:15.518667       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 21:42:15.525981       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:42:15.526026       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 21:42:15.526035       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 21:42:15.526084       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:42:15.526302       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:42:15.526312       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:42:15.527714       1 config.go:188] "Starting service config controller"
	I1025 21:42:15.527727       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:42:15.527758       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:42:15.527761       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:42:15.528100       1 config.go:315] "Starting node config controller"
	I1025 21:42:15.528107       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:42:15.629252       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:42:15.629281       1 shared_informer.go:318] Caches are synced for service config
	I1025 21:42:15.629307       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c0d6784572c9af95cac67d6de7b0fa1b47df6fbc231256785c06672335f01752] <==
	* W1025 21:41:58.881951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:41:58.882718       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:41:58.881997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:41:58.882891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 21:41:58.882032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:41:58.883050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 21:41:58.882068       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:41:58.883194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:41:58.882162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:41:58.883348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 21:41:58.882214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 21:41:58.882251       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:41:58.882283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 21:41:58.882313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 21:41:58.882451       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:41:58.883619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:41:58.883668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:41:58.883862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:41:58.883872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:41:58.884065       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 21:41:58.885422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:41:58.885613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:41:58.885587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:41:58.885842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1025 21:42:00.172733       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
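These forbidden errors are the usual startup ordering race: the scheduler's informers begin listing before its RBAC bindings and the extension-apiserver-authentication ConfigMap are readable. They clear once the client-ca cache syncs at 21:42:00 and never recur.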
	
	* 
	* ==> kubelet <==
	* Oct 25 21:45:05 addons-624750 kubelet[1349]: E1025 21:45:05.319370    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-t4wl4_default(6aa8730e-95f9-4a4d-a874-09ed56310ee4)\"" pod="default/hello-world-app-5d77478584-t4wl4" podUID="6aa8730e-95f9-4a4d-a874-09ed56310ee4"
	Oct 25 21:45:06 addons-624750 kubelet[1349]: I1025 21:45:06.133637    1349 scope.go:117] "RemoveContainer" containerID="d5f9636e61ca019a65d141de8e0ffaf49c6f91f70366c49f81f6c9e8b1e4f197"
	Oct 25 21:45:06 addons-624750 kubelet[1349]: E1025 21:45:06.134091    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(0edc7015-fa81-45c8-aa1e-0d098a17dfb0)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="0edc7015-fa81-45c8-aa1e-0d098a17dfb0"
	Oct 25 21:45:16 addons-624750 kubelet[1349]: I1025 21:45:16.765797    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z644n\" (UniqueName: \"kubernetes.io/projected/0edc7015-fa81-45c8-aa1e-0d098a17dfb0-kube-api-access-z644n\") pod \"0edc7015-fa81-45c8-aa1e-0d098a17dfb0\" (UID: \"0edc7015-fa81-45c8-aa1e-0d098a17dfb0\") "
	Oct 25 21:45:16 addons-624750 kubelet[1349]: I1025 21:45:16.771638    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edc7015-fa81-45c8-aa1e-0d098a17dfb0-kube-api-access-z644n" (OuterVolumeSpecName: "kube-api-access-z644n") pod "0edc7015-fa81-45c8-aa1e-0d098a17dfb0" (UID: "0edc7015-fa81-45c8-aa1e-0d098a17dfb0"). InnerVolumeSpecName "kube-api-access-z644n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:45:16 addons-624750 kubelet[1349]: I1025 21:45:16.867133    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z644n\" (UniqueName: \"kubernetes.io/projected/0edc7015-fa81-45c8-aa1e-0d098a17dfb0-kube-api-access-z644n\") on node \"addons-624750\" DevicePath \"\""
	Oct 25 21:45:17 addons-624750 kubelet[1349]: I1025 21:45:17.348815    1349 scope.go:117] "RemoveContainer" containerID="d5f9636e61ca019a65d141de8e0ffaf49c6f91f70366c49f81f6c9e8b1e4f197"
	Oct 25 21:45:18 addons-624750 kubelet[1349]: I1025 21:45:18.133494    1349 scope.go:117] "RemoveContainer" containerID="6c021779740c7b099edaea14e94213ed92fa923b6f73f8905808cc621aeba507"
	Oct 25 21:45:18 addons-624750 kubelet[1349]: I1025 21:45:18.353719    1349 scope.go:117] "RemoveContainer" containerID="6c021779740c7b099edaea14e94213ed92fa923b6f73f8905808cc621aeba507"
	Oct 25 21:45:18 addons-624750 kubelet[1349]: I1025 21:45:18.354080    1349 scope.go:117] "RemoveContainer" containerID="4903687aa573154d69200f24ebfc27df46bccf445fd62ae8239d3f5f9c156532"
	Oct 25 21:45:18 addons-624750 kubelet[1349]: E1025 21:45:18.354352    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-t4wl4_default(6aa8730e-95f9-4a4d-a874-09ed56310ee4)\"" pod="default/hello-world-app-5d77478584-t4wl4" podUID="6aa8730e-95f9-4a4d-a874-09ed56310ee4"
	Oct 25 21:45:19 addons-624750 kubelet[1349]: I1025 21:45:19.136338    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0edc7015-fa81-45c8-aa1e-0d098a17dfb0" path="/var/lib/kubelet/pods/0edc7015-fa81-45c8-aa1e-0d098a17dfb0/volumes"
	Oct 25 21:45:19 addons-624750 kubelet[1349]: I1025 21:45:19.137841    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c480b7f7-4fce-4c75-8062-2f18ec96d035" path="/var/lib/kubelet/pods/c480b7f7-4fce-4c75-8062-2f18ec96d035/volumes"
	Oct 25 21:45:19 addons-624750 kubelet[1349]: I1025 21:45:19.139574    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce7606ed-48cc-403c-a6f5-65eba7266794" path="/var/lib/kubelet/pods/ce7606ed-48cc-403c-a6f5-65eba7266794/volumes"
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.380128    1349 scope.go:117] "RemoveContainer" containerID="19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f"
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.387489    1349 scope.go:117] "RemoveContainer" containerID="19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f"
	Oct 25 21:45:21 addons-624750 kubelet[1349]: E1025 21:45:21.388037    1349 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\": not found" containerID="19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f"
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.388089    1349 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f"} err="failed to get container status \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\": rpc error: code = NotFound desc = an error occurred when try to find container \"19ddde21e3e5e5000992e3192393c82385779b0fd559b348fea4166085b5187f\": not found"
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.498044    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05daa84f-b342-4b02-98e1-f1212cee0926-webhook-cert\") pod \"05daa84f-b342-4b02-98e1-f1212cee0926\" (UID: \"05daa84f-b342-4b02-98e1-f1212cee0926\") "
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.498107    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgpg\" (UniqueName: \"kubernetes.io/projected/05daa84f-b342-4b02-98e1-f1212cee0926-kube-api-access-ntgpg\") pod \"05daa84f-b342-4b02-98e1-f1212cee0926\" (UID: \"05daa84f-b342-4b02-98e1-f1212cee0926\") "
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.500798    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05daa84f-b342-4b02-98e1-f1212cee0926-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "05daa84f-b342-4b02-98e1-f1212cee0926" (UID: "05daa84f-b342-4b02-98e1-f1212cee0926"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.505226    1349 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05daa84f-b342-4b02-98e1-f1212cee0926-kube-api-access-ntgpg" (OuterVolumeSpecName: "kube-api-access-ntgpg") pod "05daa84f-b342-4b02-98e1-f1212cee0926" (UID: "05daa84f-b342-4b02-98e1-f1212cee0926"). InnerVolumeSpecName "kube-api-access-ntgpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.598978    1349 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05daa84f-b342-4b02-98e1-f1212cee0926-webhook-cert\") on node \"addons-624750\" DevicePath \"\""
	Oct 25 21:45:21 addons-624750 kubelet[1349]: I1025 21:45:21.599032    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ntgpg\" (UniqueName: \"kubernetes.io/projected/05daa84f-b342-4b02-98e1-f1212cee0926-kube-api-access-ntgpg\") on node \"addons-624750\" DevicePath \"\""
	Oct 25 21:45:23 addons-624750 kubelet[1349]: I1025 21:45:23.135755    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="05daa84f-b342-4b02-98e1-f1212cee0926" path="/var/lib/kubelet/pods/05daa84f-b342-4b02-98e1-f1212cee0926/volumes"
	
	* 
	* ==> storage-provisioner [636716017ad1efe37aae86cd9468f64fb14b5f67fe051a0914c5819da1100b92] <==
	* I1025 21:42:20.706475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:42:20.747991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:42:20.748178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:42:20.774986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:42:20.777308       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-624750_792f7db3-fc81-4854-9e1d-d7559afa4a0f!
	I1025 21:42:20.778371       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2274de4c-b305-4721-a237-bf8bcc3eb166", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-624750_792f7db3-fc81-4854-9e1d-d7559afa4a0f became leader
	I1025 21:42:20.878593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-624750_792f7db3-fc81-4854-9e1d-d7559afa4a0f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-624750 -n addons-624750
helpers_test.go:261: (dbg) Run:  kubectl --context addons-624750 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.83s)
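Analysis note: every step above up to the DNS check passed (the ingress itself served the curl request); the failure is the `nslookup hello-john.test 192.168.49.2` query timing out, i.e. the ingress-dns addon never answered on the node IP. TestIngressAddonLegacy/serial/ValidateIngressAddons below fails at the same step. A rough Go sketch of the probe the test performs (the hostname, node IP, and timeout are copied from this run; this is not the harness code, which shells out to nslookup):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send DNS queries to the minikube node IP instead of the system
		// resolver, approximating `nslookup hello-john.test 192.168.49.2`.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // this run: connection timed out
			return
		}
		fmt.Println("resolved:", addrs)
	}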

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr: (3.744171772s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-934322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.08s)
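Analysis note: `image load --daemon` exited 0, but the assertion at functional_test.go:442 then lists the images in the profile and does not find the tag under containerd. ImageReloadDaemon and ImageTagAndLoadDaemon below fail on the same check. A standalone sketch of that verification (binary path, profile, and tag copied from this run; not the harness code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List the images visible inside the functional-934322 profile.
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "functional-934322", "image", "ls").CombinedOutput()
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}
		const tag = "gcr.io/google-containers/addon-resizer:functional-934322"
		if !strings.Contains(string(out), tag) {
			// Mirrors the failure reported at functional_test.go:442.
			fmt.Println("image not loaded:", tag)
		}
	}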

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr: (3.209636308s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-934322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.636037248s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-934322
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 image load --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr: (3.335580707s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-934322" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image save gcr.io/google-containers/addon-resizer:functional-934322 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)
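Analysis note: `image save` likewise exited 0 without producing the tarball; the check at functional_test.go:385 is just an existence test on the target path. A minimal sketch of the same check (path copied from this run; not the harness code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const tar = "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
		// The test only asserts the tarball exists after `image save`.
		if _, err := os.Stat(tar); err != nil {
			fmt.Println("expected tar to exist after `image save`:", err) // this run: no such file
		}
	}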

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I1025 21:50:25.406260  436496 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:50:25.406449  436496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:25.406459  436496 out.go:309] Setting ErrFile to fd 2...
	I1025 21:50:25.406465  436496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:25.406739  436496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 21:50:25.407402  436496 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:50:25.407534  436496 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:50:25.408153  436496 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
	I1025 21:50:25.433215  436496 ssh_runner.go:195] Run: systemctl --version
	I1025 21:50:25.433321  436496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
	I1025 21:50:25.457275  436496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
	I1025 21:50:25.554902  436496 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1025 21:50:25.554955  436496 cache_images.go:254] Failed to load cached images for profile functional-934322. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1025 21:50:25.554973  436496 cache_images.go:262] succeeded pushing to: 
	I1025 21:50:25.554978  436496 cache_images.go:263] failed pushing to: functional-934322

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
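Analysis note: this failure cascades from ImageSaveToFile above; the stat error in the stderr log shows the tarball the save step should have produced never existed, so the load had nothing to read.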

TestIngressAddonLegacy/serial/ValidateIngressAddons (48.14s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-356915 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-356915 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.640691076s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-356915 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-356915 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c35898e3-27dc-4352-9498-f61d58cf6376] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c35898e3-27dc-4352-9498-f61d58cf6376] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.013134878s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-356915 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021151644s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons disable ingress-dns --alsologtostderr -v=1: (3.849155268s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons disable ingress --alsologtostderr -v=1
E1025 21:53:34.259100  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons disable ingress --alsologtostderr -v=1: (7.583158336s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-356915
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-356915:

-- stdout --
	[
	    {
	        "Id": "952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599",
	        "Created": "2023-10-25T21:51:29.494054333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441165,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T21:51:29.79770089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599/hostname",
	        "HostsPath": "/var/lib/docker/containers/952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599/hosts",
	        "LogPath": "/var/lib/docker/containers/952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599/952311f321303040a6a0acfce42a9e1051bfee757d3176f8fa6cd617645ff599-json.log",
	        "Name": "/ingress-addon-legacy-356915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-356915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-356915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f1c349e208dc0559b82fa3dd61aebdd2e63db851be2ad26c92806f1f81124fc6-init/diff:/var/lib/docker/overlay2/72a373cc1a648bd482c91a7d51c6d15fd52c6262ee2446bc4493d43e0c8c95ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1c349e208dc0559b82fa3dd61aebdd2e63db851be2ad26c92806f1f81124fc6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1c349e208dc0559b82fa3dd61aebdd2e63db851be2ad26c92806f1f81124fc6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1c349e208dc0559b82fa3dd61aebdd2e63db851be2ad26c92806f1f81124fc6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-356915",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-356915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-356915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-356915",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-356915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e94ef1dcae66f1fb88e30d5564539768b385462ba2ae8fc48db42a924bc6c90",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4e94ef1dcae6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-356915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "952311f32130",
	                        "ingress-addon-legacy-356915"
	                    ],
	                    "NetworkID": "e4bf4208ae4128fb7b4388ed72b1827c5b2e5abade8242baf6f4589b06e0b91c",
	                    "EndpointID": "dd51877f033a2483bf071e92be567f13cc85c3e42927e40fb59b7ed638cceeac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-356915 -n ingress-addon-legacy-356915
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-356915 logs -n 25: (1.463792201s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-934322 ssh findmnt                                          | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-934322                                                   | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-934322                                                   | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-934322 ssh findmnt                                          | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC | 25 Oct 23 21:50 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-934322 ssh findmnt                                          | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC | 25 Oct 23 21:50 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-934322 ssh findmnt                                          | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC | 25 Oct 23 21:50 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-934322                                                   | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:50 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-934322 ssh pgrep                                            | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-934322 image build -t                                       | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | localhost/my-image:functional-934322                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-934322 image ls                                             | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	| image          | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-934322                                                      | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-934322                                                   | functional-934322           | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:51 UTC |
	| start          | -p ingress-addon-legacy-356915                                         | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:51 UTC | 25 Oct 23 21:52 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-356915                                            | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:52 UTC | 25 Oct 23 21:52 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-356915                                            | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:52 UTC | 25 Oct 23 21:52 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-356915                                            | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:53 UTC | 25 Oct 23 21:53 UTC |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-356915 ip                                         | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:53 UTC | 25 Oct 23 21:53 UTC |
	| addons         | ingress-addon-legacy-356915                                            | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:53 UTC | 25 Oct 23 21:53 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-356915                                            | ingress-addon-legacy-356915 | jenkins | v1.31.2 | 25 Oct 23 21:53 UTC | 25 Oct 23 21:53 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:51:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:51:09.913759  440708 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:51:09.913947  440708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:51:09.913977  440708 out.go:309] Setting ErrFile to fd 2...
	I1025 21:51:09.914001  440708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:51:09.914288  440708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 21:51:09.914747  440708 out.go:303] Setting JSON to false
	I1025 21:51:09.915884  440708 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5607,"bootTime":1698265063,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:51:09.915993  440708 start.go:138] virtualization:  
	I1025 21:51:09.918663  440708 out.go:177] * [ingress-addon-legacy-356915] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 21:51:09.920950  440708 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:51:09.921144  440708 notify.go:220] Checking for updates...
	I1025 21:51:09.924416  440708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:51:09.926205  440708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:51:09.928229  440708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:51:09.930179  440708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 21:51:09.932150  440708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:51:09.934431  440708 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:51:09.958819  440708 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:51:09.958931  440708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:51:10.046372  440708 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-25 21:51:10.033808085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:51:10.048586  440708 docker.go:295] overlay module found
	I1025 21:51:10.052050  440708 out.go:177] * Using the docker driver based on user configuration
	I1025 21:51:10.054405  440708 start.go:298] selected driver: docker
	I1025 21:51:10.054450  440708 start.go:902] validating driver "docker" against <nil>
	I1025 21:51:10.054467  440708 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:51:10.055236  440708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:51:10.132989  440708 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-25 21:51:10.122448967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:51:10.133293  440708 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:51:10.133556  440708 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:51:10.135949  440708 out.go:177] * Using Docker driver with root privileges
	I1025 21:51:10.138100  440708 cni.go:84] Creating CNI manager for ""
	I1025 21:51:10.138128  440708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:51:10.138145  440708 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:51:10.138157  440708 start_flags.go:323] config:
	{Name:ingress-addon-legacy-356915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:51:10.140559  440708 out.go:177] * Starting control plane node ingress-addon-legacy-356915 in cluster ingress-addon-legacy-356915
	I1025 21:51:10.142295  440708 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1025 21:51:10.144654  440708 out.go:177] * Pulling base image ...
	I1025 21:51:10.147016  440708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:51:10.146975  440708 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1025 21:51:10.169806  440708 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 21:51:10.169836  440708 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 21:51:10.215544  440708 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1025 21:51:10.215569  440708 cache.go:56] Caching tarball of preloaded images
	I1025 21:51:10.215772  440708 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1025 21:51:10.218041  440708 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1025 21:51:10.219744  440708 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:51:10.331168  440708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1025 21:51:21.616282  440708 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:51:21.616382  440708 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:51:22.811355  440708 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1025 21:51:22.811736  440708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/config.json ...
	I1025 21:51:22.811770  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/config.json: {Name:mk8042fbceb25dc72c03d23cbd8528fed2d42259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:22.811962  440708 cache.go:194] Successfully downloaded all kic artifacts
	I1025 21:51:22.812022  440708 start.go:365] acquiring machines lock for ingress-addon-legacy-356915: {Name:mk4fd72cd365a28298221983752c26e8c47e1517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:51:22.812091  440708 start.go:369] acquired machines lock for "ingress-addon-legacy-356915" in 50.289µs
	I1025 21:51:22.812113  440708 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-356915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 21:51:22.812182  440708 start.go:125] createHost starting for "" (driver="docker")
	I1025 21:51:22.814392  440708 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 21:51:22.814695  440708 start.go:159] libmachine.API.Create for "ingress-addon-legacy-356915" (driver="docker")
	I1025 21:51:22.814720  440708 client.go:168] LocalClient.Create starting
	I1025 21:51:22.814783  440708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem
	I1025 21:51:22.814822  440708 main.go:141] libmachine: Decoding PEM data...
	I1025 21:51:22.814840  440708 main.go:141] libmachine: Parsing certificate...
	I1025 21:51:22.814932  440708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem
	I1025 21:51:22.814955  440708 main.go:141] libmachine: Decoding PEM data...
	I1025 21:51:22.814971  440708 main.go:141] libmachine: Parsing certificate...
	I1025 21:51:22.815316  440708 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-356915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 21:51:22.833157  440708 cli_runner.go:211] docker network inspect ingress-addon-legacy-356915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 21:51:22.833251  440708 network_create.go:281] running [docker network inspect ingress-addon-legacy-356915] to gather additional debugging logs...
	I1025 21:51:22.833274  440708 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-356915
	W1025 21:51:22.851181  440708 cli_runner.go:211] docker network inspect ingress-addon-legacy-356915 returned with exit code 1
	I1025 21:51:22.851217  440708 network_create.go:284] error running [docker network inspect ingress-addon-legacy-356915]: docker network inspect ingress-addon-legacy-356915: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-356915 not found
	I1025 21:51:22.851235  440708 network_create.go:286] output of [docker network inspect ingress-addon-legacy-356915]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-356915 not found
	
	** /stderr **
	I1025 21:51:22.851346  440708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:51:22.869133  440708 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400050fce0}
	I1025 21:51:22.869176  440708 network_create.go:124] attempt to create docker network ingress-addon-legacy-356915 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 21:51:22.869238  440708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-356915 ingress-addon-legacy-356915
	I1025 21:51:22.943359  440708 network_create.go:108] docker network ingress-addon-legacy-356915 192.168.49.0/24 created
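
The network-creation step above is plain Docker: minikube picks the first free private /24 and creates a labeled bridge network for the profile. A minimal by-hand equivalent, with the name and subnet taken from the log (the -o --ip-masq / -o --icc options are passed verbatim as shown above):

    # recreate the profile network, then confirm its subnet and gateway
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 ingress-addon-legacy-356915
    docker network inspect ingress-addon-legacy-356915 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
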
	I1025 21:51:22.943394  440708 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-356915" container
	I1025 21:51:22.943469  440708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 21:51:22.960692  440708 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-356915 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-356915 --label created_by.minikube.sigs.k8s.io=true
	I1025 21:51:22.979703  440708 oci.go:103] Successfully created a docker volume ingress-addon-legacy-356915
	I1025 21:51:22.979793  440708 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-356915-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-356915 --entrypoint /usr/bin/test -v ingress-addon-legacy-356915:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 21:51:24.467988  440708 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-356915-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-356915 --entrypoint /usr/bin/test -v ingress-addon-legacy-356915:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.488147379s)
	I1025 21:51:24.468019  440708 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-356915
	I1025 21:51:24.468036  440708 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1025 21:51:24.468058  440708 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 21:51:24.468138  440708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-356915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 21:51:29.405590  440708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-356915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.937403657s)
	I1025 21:51:29.405623  440708 kic.go:200] duration metric: took 4.937561 seconds to extract preloaded images to volume
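
Worth noting: the kicbase image itself serves as a throwaway "preload sidecar" here, first probing that /var/lib exists in the fresh volume, then untarring the preload into it before the real node container ever starts. A sketch of the extraction step, assuming the preload tarball is already on the host (the host path is shortened and <kicbase-image> stands for the gcr.io/k8s-minikube/kicbase-builds digest above):

    # extract a host-side .tar.lz4 into a docker volume without booting the node
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v ingress-addon-legacy-356915:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
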
	W1025 21:51:29.405775  440708 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 21:51:29.405882  440708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 21:51:29.476940  440708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-356915 --name ingress-addon-legacy-356915 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-356915 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-356915 --network ingress-addon-legacy-356915 --ip 192.168.49.2 --volume ingress-addon-legacy-356915:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:51:29.806996  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Running}}
	I1025 21:51:29.846917  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:51:29.884745  440708 cli_runner.go:164] Run: docker exec ingress-addon-legacy-356915 stat /var/lib/dpkg/alternatives/iptables
	I1025 21:51:29.976773  440708 oci.go:144] the created container "ingress-addon-legacy-356915" has a running status.
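
Liveness of the new node container is probed with inspect templates rather than by parsing docker ps; the two checks above are equivalent to:

    # is the container's main process running? (true/false)
    docker container inspect ingress-addon-legacy-356915 --format '{{.State.Running}}'
    # coarser lifecycle state: created / running / exited / ...
    docker container inspect ingress-addon-legacy-356915 --format '{{.State.Status}}'
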
	I1025 21:51:29.976809  440708 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa...
	I1025 21:51:30.247523  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 21:51:30.247613  440708 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 21:51:30.273976  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:51:30.300725  440708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 21:51:30.300757  440708 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-356915 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 21:51:30.400640  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:51:30.424563  440708 machine.go:88] provisioning docker machine ...
	I1025 21:51:30.424595  440708 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-356915"
	I1025 21:51:30.424663  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:30.444611  440708 main.go:141] libmachine: Using SSH client type: native
	I1025 21:51:30.445044  440708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1025 21:51:30.445112  440708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-356915 && echo "ingress-addon-legacy-356915" | sudo tee /etc/hostname
	I1025 21:51:30.449321  440708 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 21:51:33.604126  440708 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-356915
	
	I1025 21:51:33.604213  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:33.622914  440708 main.go:141] libmachine: Using SSH client type: native
	I1025 21:51:33.623322  440708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1025 21:51:33.623346  440708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-356915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-356915/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-356915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:51:33.762723  440708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:51:33.762748  440708 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-401064/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-401064/.minikube}
	I1025 21:51:33.762767  440708 ubuntu.go:177] setting up certificates
	I1025 21:51:33.762775  440708 provision.go:83] configureAuth start
	I1025 21:51:33.762963  440708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-356915
	I1025 21:51:33.781662  440708 provision.go:138] copyHostCerts
	I1025 21:51:33.781699  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem
	I1025 21:51:33.781728  440708 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem, removing ...
	I1025 21:51:33.781744  440708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem
	I1025 21:51:33.781820  440708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem (1082 bytes)
	I1025 21:51:33.781896  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem
	I1025 21:51:33.781916  440708 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem, removing ...
	I1025 21:51:33.781921  440708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem
	I1025 21:51:33.781952  440708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem (1123 bytes)
	I1025 21:51:33.781998  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem
	I1025 21:51:33.782019  440708 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem, removing ...
	I1025 21:51:33.782027  440708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem
	I1025 21:51:33.782051  440708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem (1675 bytes)
	I1025 21:51:33.782095  440708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-356915 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-356915]
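
The server certificate is issued against the minikube CA with the SAN list shown above (node IP, loopback, localhost, minikube, and the profile name). As a rough, hand-rolled openssl equivalent, purely illustrative (the file names are hypothetical, not the paths minikube writes, and minikube's own Go code does the signing rather than shelling out):

    # sketch: issue a server cert with IP and DNS SANs from an existing CA
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ingress-addon-legacy-356915/CN=minikube" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-356915')
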
	I1025 21:51:34.468149  440708 provision.go:172] copyRemoteCerts
	I1025 21:51:34.468216  440708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:51:34.468259  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:34.485995  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:51:34.583468  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 21:51:34.583530  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 21:51:34.612260  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 21:51:34.612331  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1025 21:51:34.641906  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 21:51:34.641973  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 21:51:34.669661  440708 provision.go:86] duration metric: configureAuth took 906.823852ms
	I1025 21:51:34.669686  440708 ubuntu.go:193] setting minikube options for container-runtime
	I1025 21:51:34.669895  440708 config.go:182] Loaded profile config "ingress-addon-legacy-356915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1025 21:51:34.669906  440708 machine.go:91] provisioned docker machine in 4.245325022s
	I1025 21:51:34.669913  440708 client.go:171] LocalClient.Create took 11.855188039s
	I1025 21:51:34.669932  440708 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-356915" took 11.855236613s
	I1025 21:51:34.669945  440708 start.go:300] post-start starting for "ingress-addon-legacy-356915" (driver="docker")
	I1025 21:51:34.669954  440708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:51:34.670013  440708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:51:34.670054  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:34.687569  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:51:34.788567  440708 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:51:34.792909  440708 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 21:51:34.792951  440708 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 21:51:34.792963  440708 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 21:51:34.792970  440708 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 21:51:34.792980  440708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/addons for local assets ...
	I1025 21:51:34.793049  440708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/files for local assets ...
	I1025 21:51:34.793165  440708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem -> 4064532.pem in /etc/ssl/certs
	I1025 21:51:34.793176  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem -> /etc/ssl/certs/4064532.pem
	I1025 21:51:34.793332  440708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:51:34.804147  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem --> /etc/ssl/certs/4064532.pem (1708 bytes)
	I1025 21:51:34.833211  440708 start.go:303] post-start completed in 163.250003ms
	I1025 21:51:34.833592  440708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-356915
	I1025 21:51:34.852264  440708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/config.json ...
	I1025 21:51:34.852551  440708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:51:34.852602  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:34.870948  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:51:34.967079  440708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 21:51:34.972661  440708 start.go:128] duration metric: createHost completed in 12.16046225s
	I1025 21:51:34.972687  440708 start.go:83] releasing machines lock for "ingress-addon-legacy-356915", held for 12.160584448s
	I1025 21:51:34.972777  440708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-356915
	I1025 21:51:34.991843  440708 ssh_runner.go:195] Run: cat /version.json
	I1025 21:51:34.991870  440708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:51:34.991893  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:34.991940  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:51:35.016224  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:51:35.019698  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:51:35.243074  440708 ssh_runner.go:195] Run: systemctl --version
	I1025 21:51:35.248773  440708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 21:51:35.254545  440708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 21:51:35.285398  440708 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 21:51:35.285519  440708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:51:35.319945  440708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 21:51:35.319969  440708 start.go:472] detecting cgroup driver to use...
	I1025 21:51:35.320002  440708 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 21:51:35.320059  440708 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 21:51:35.334548  440708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 21:51:35.348491  440708 docker.go:198] disabling cri-docker service (if available) ...
	I1025 21:51:35.348588  440708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:51:35.364678  440708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:51:35.381776  440708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:51:35.489363  440708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:51:35.593438  440708 docker.go:214] disabling docker service ...
	I1025 21:51:35.593506  440708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:51:35.614899  440708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:51:35.628380  440708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:51:35.728387  440708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:51:35.831665  440708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:51:35.846911  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:51:35.866941  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1025 21:51:35.879397  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 21:51:35.892408  440708 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 21:51:35.892514  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 21:51:35.905385  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:51:35.918326  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 21:51:35.930755  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:51:35.943137  440708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:51:35.955080  440708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 21:51:35.966961  440708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:51:35.977305  440708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:51:35.987448  440708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:51:36.089491  440708 ssh_runner.go:195] Run: sudo systemctl restart containerd
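
The burst of sed edits above rewrites /etc/containerd/config.toml in place to match what was detected on the host: pin the sandbox (pause) image for v1.18, disable restrict_oom_score_adj, force SystemdCgroup = false since the host uses the cgroupfs driver, and migrate any io.containerd.runtime.v1.linux / runc.v1 runtime entries to io.containerd.runc.v2. Condensed to its essentials:

    # align containerd with the cgroupfs driver and the runc v2 shim, then restart
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
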
	I1025 21:51:36.237651  440708 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1025 21:51:36.237768  440708 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1025 21:51:36.242558  440708 start.go:540] Will wait 60s for crictl version
	I1025 21:51:36.242649  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:36.247036  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:51:36.291943  440708 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1025 21:51:36.292062  440708 ssh_runner.go:195] Run: containerd --version
	I1025 21:51:36.318631  440708 ssh_runner.go:195] Run: containerd --version
	I1025 21:51:36.353028  440708 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.24 ...
	I1025 21:51:36.355131  440708 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-356915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 21:51:36.372834  440708 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 21:51:36.377557  440708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:51:36.391115  440708 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1025 21:51:36.391189  440708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:51:36.434143  440708 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1025 21:51:36.434218  440708 ssh_runner.go:195] Run: which lz4
	I1025 21:51:36.438751  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1025 21:51:36.438858  440708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 21:51:36.443254  440708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 21:51:36.443289  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1025 21:51:38.679129  440708 containerd.go:547] Took 2.240310 seconds to copy over tarball
	I1025 21:51:38.679216  440708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 21:51:41.358876  440708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.679630686s)
	I1025 21:51:41.358901  440708 containerd.go:554] Took 2.679747 seconds to extract the tarball
	I1025 21:51:41.358912  440708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 21:51:41.442222  440708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:51:41.543926  440708 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 21:51:41.686756  440708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:51:41.735437  440708 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
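
The preload round-trip above, condensed: the ~489 MB tarball is scp'd into the node, unpacked directly over /var (where containerd keeps its image store), removed, and containerd is bounced before the image list is re-checked. Inside the node this amounts to:

    # unpack the preload over /var, clean up, and reload the runtime
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo systemctl daemon-reload && sudo systemctl restart containerd
    sudo crictl images --output json   # re-list what the runtime now sees
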
	I1025 21:51:41.735459  440708 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 21:51:41.735498  440708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:51:41.735693  440708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:51:41.735784  440708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:51:41.735857  440708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:51:41.735934  440708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:51:41.736019  440708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1025 21:51:41.736077  440708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:51:41.736129  440708 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1025 21:51:41.738349  440708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:51:41.738790  440708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:51:41.738966  440708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 21:51:41.739097  440708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:51:41.739226  440708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:51:41.739351  440708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:51:41.739589  440708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:51:41.739841  440708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1025 21:51:42.023022  440708 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.023247  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	I1025 21:51:42.046504  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W1025 21:51:42.065779  440708 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.065999  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W1025 21:51:42.077925  440708 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.078128  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W1025 21:51:42.096658  440708 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.096824  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W1025 21:51:42.104844  440708 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.105087  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W1025 21:51:42.108191  440708 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.108471  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W1025 21:51:42.313465  440708 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 21:51:42.313593  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1025 21:51:42.405484  440708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1025 21:51:42.405555  440708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:51:42.405609  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.633227  440708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1025 21:51:42.633311  440708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1025 21:51:42.633398  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.805791  440708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1025 21:51:42.805839  440708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:51:42.805897  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.903756  440708 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1025 21:51:42.903811  440708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:51:42.903865  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.903945  440708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1025 21:51:42.903966  440708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:51:42.904004  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.950899  440708 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1025 21:51:42.950949  440708 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1025 21:51:42.951004  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.979950  440708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1025 21:51:42.980180  440708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:51:42.980229  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 21:51:42.980234  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.980148  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 21:51:42.980081  440708 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 21:51:42.980368  440708 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:51:42.980397  440708 ssh_runner.go:195] Run: which crictl
	I1025 21:51:42.980416  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1025 21:51:42.980462  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1025 21:51:42.980288  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1025 21:51:42.980509  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1025 21:51:43.115670  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:51:43.115768  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1025 21:51:43.115815  440708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1025 21:51:43.115883  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1025 21:51:43.149547  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1025 21:51:43.149635  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1025 21:51:43.149718  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1025 21:51:43.149776  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1025 21:51:43.213933  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1025 21:51:43.214020  440708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 21:51:43.214066  440708 cache_images.go:92] LoadImages completed in 1.478594824s
	W1025 21:51:43.214130  440708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17488-401064/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
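
The per-image dance above works like this: each required image is checked in containerd's k8s.io namespace; an amd64 manifest on this arm64 host counts as unusable ("arch mismatch ... fixing"), so the stale tag is removed with crictl and the loader falls back to the per-arch file cache under .minikube/cache/images/arm64, which is empty on this runner, hence the X warning. The check/remove pair, taken from the log:

    # is a usable (right-arch) copy present in containerd?
    sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20
    # if not, drop the tag so it can be re-pulled or side-loaded later
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
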
	I1025 21:51:43.214180  440708 ssh_runner.go:195] Run: sudo crictl info
	I1025 21:51:43.253813  440708 cni.go:84] Creating CNI manager for ""
	I1025 21:51:43.253837  440708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:51:43.253867  440708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:51:43.253888  440708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-356915 NodeName:ingress-addon-legacy-356915 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 21:51:43.254020  440708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-356915"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:51:43.254093  440708 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-356915 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:51:43.254163  440708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1025 21:51:43.264861  440708 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:51:43.264934  440708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:51:43.275569  440708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1025 21:51:43.296313  440708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1025 21:51:43.317818  440708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I1025 21:51:43.338464  440708 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 21:51:43.342999  440708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
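
Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: filter out any existing line for the name, append the fresh mapping, and sudo-copy the result back, since a bare redirect would not run as root. Spelled out (printf '\t' stands in for the literal tab in the log):

    # idempotently (re)pin a name in /etc/hosts
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
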
	I1025 21:51:43.356678  440708 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915 for IP: 192.168.49.2
	I1025 21:51:43.356765  440708 certs.go:190] acquiring lock for shared ca certs: {Name:mkce8239dfcf921c4b21f688c78784f182dcce0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:43.356942  440708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key
	I1025 21:51:43.357019  440708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key
	I1025 21:51:43.357099  440708 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key
	I1025 21:51:43.357115  440708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt with IP's: []
	I1025 21:51:43.590583  440708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt ...
	I1025 21:51:43.590613  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: {Name:mk50125564816483a6a84f6ed9c48caf9a2bf429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:43.590808  440708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key ...
	I1025 21:51:43.590821  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key: {Name:mke2cf1d729cc3d583bbe3c6abc86267f24d3dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:43.590916  440708 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key.dd3b5fb2
	I1025 21:51:43.590934  440708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:51:44.037278  440708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt.dd3b5fb2 ...
	I1025 21:51:44.037316  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt.dd3b5fb2: {Name:mke46359ef9f0ed61124f55842cf071f43febd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:44.037500  440708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key.dd3b5fb2 ...
	I1025 21:51:44.037514  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key.dd3b5fb2: {Name:mk84675a2b27120cc232ed42c81ac3172a9102a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:44.037599  440708 certs.go:337] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt
	I1025 21:51:44.037679  440708 certs.go:341] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key
	I1025 21:51:44.037738  440708 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.key
	I1025 21:51:44.037755  440708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.crt with IP's: []
	I1025 21:51:44.628245  440708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.crt ...
	I1025 21:51:44.628277  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.crt: {Name:mk7a2c12faffaef740ad7ba44335ad00a67db77d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:44.628459  440708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.key ...
	I1025 21:51:44.628471  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.key: {Name:mk4f07bb831f82899946032299ae69219731c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:51:44.628558  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 21:51:44.628578  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 21:51:44.628590  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 21:51:44.628606  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 21:51:44.628625  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 21:51:44.628640  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 21:51:44.628655  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 21:51:44.628673  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 21:51:44.628729  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453.pem (1338 bytes)
	W1025 21:51:44.628767  440708 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453_empty.pem, impossibly tiny 0 bytes
	I1025 21:51:44.628780  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:51:44.628805  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem (1082 bytes)
	I1025 21:51:44.628834  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:51:44.628863  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem (1675 bytes)
	I1025 21:51:44.628915  440708 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem (1708 bytes)
	I1025 21:51:44.628946  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:51:44.628961  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453.pem -> /usr/share/ca-certificates/406453.pem
	I1025 21:51:44.628974  440708 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem -> /usr/share/ca-certificates/4064532.pem
	I1025 21:51:44.629555  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:51:44.658835  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 21:51:44.688047  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:51:44.716270  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:51:44.744922  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:51:44.773993  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:51:44.802295  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:51:44.829972  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 21:51:44.860218  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:51:44.888117  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453.pem --> /usr/share/ca-certificates/406453.pem (1338 bytes)
	I1025 21:51:44.916447  440708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem --> /usr/share/ca-certificates/4064532.pem (1708 bytes)
	I1025 21:51:44.946925  440708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:51:44.968669  440708 ssh_runner.go:195] Run: openssl version
	I1025 21:51:44.975562  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:51:44.986997  440708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:51:44.991950  440708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:51:44.992057  440708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:51:45.000842  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 21:51:45.014650  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/406453.pem && ln -fs /usr/share/ca-certificates/406453.pem /etc/ssl/certs/406453.pem"
	I1025 21:51:45.049242  440708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/406453.pem
	I1025 21:51:45.055086  440708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:47 /usr/share/ca-certificates/406453.pem
	I1025 21:51:45.055221  440708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/406453.pem
	I1025 21:51:45.072267  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/406453.pem /etc/ssl/certs/51391683.0"
	I1025 21:51:45.087538  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4064532.pem && ln -fs /usr/share/ca-certificates/4064532.pem /etc/ssl/certs/4064532.pem"
	I1025 21:51:45.102454  440708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4064532.pem
	I1025 21:51:45.111316  440708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:47 /usr/share/ca-certificates/4064532.pem
	I1025 21:51:45.111477  440708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4064532.pem
	I1025 21:51:45.122725  440708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4064532.pem /etc/ssl/certs/3ec20f2e.0"
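
The three openssl/ln runs above implement OpenSSL's hashed CA directory: each trusted certificate must be reachable in /etc/ssl/certs via a symlink named after its subject hash (e.g. b5213941.0) for libssl to find it. A minimal sketch of the same idea, assuming a certificate already copied to /usr/share/ca-certificates/example.pem (the filename here is illustrative):

    # Hash-based lookup: libssl finds CAs in /etc/ssl/certs by subject hash.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # ".0" is the collision counter; bump it if two CAs share a hash.
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"
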
	I1025 21:51:45.143664  440708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:51:45.151286  440708 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 21:51:45.151395  440708 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-356915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:51:45.151524  440708 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1025 21:51:45.151623  440708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:51:45.219783  440708 cri.go:89] found id: ""
	I1025 21:51:45.219957  440708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:51:45.234624  440708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:51:45.248272  440708 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 21:51:45.248360  440708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:51:45.262309  440708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:51:45.262380  440708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
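
The single long init invocation above is easier to audit with the preflight-error list built up separately; a sketch equivalent to the logged command, with no flags added or removed:

    IGNORE=DirAvailable--etc-kubernetes-manifests
    IGNORE+=,DirAvailable--var-lib-minikube
    IGNORE+=,DirAvailable--var-lib-minikube-etcd
    IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
    IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
    IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
    IGNORE+=,FileAvailable--etc-kubernetes-manifests-etcd.yaml
    IGNORE+=,Port-10250,Swap,NumCPU,SystemVerification
    IGNORE+=,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors="$IGNORE"
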
	I1025 21:51:45.345084  440708 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1025 21:51:45.345346  440708 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 21:51:45.405139  440708 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 21:51:45.405225  440708 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1025 21:51:45.405263  440708 kubeadm.go:322] OS: Linux
	I1025 21:51:45.405312  440708 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 21:51:45.405371  440708 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 21:51:45.405473  440708 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 21:51:45.405590  440708 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 21:51:45.405686  440708 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 21:51:45.405850  440708 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 21:51:45.503567  440708 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:51:45.503673  440708 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:51:45.503766  440708 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 21:51:45.759587  440708 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:51:45.759686  440708 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:51:45.759728  440708 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 21:51:45.861155  440708 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:51:45.864023  440708 out.go:204]   - Generating certificates and keys ...
	I1025 21:51:45.864191  440708 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 21:51:45.864608  440708 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 21:51:46.613942  440708 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:51:47.372282  440708 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:51:47.678579  440708 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:51:48.300560  440708 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 21:51:49.246054  440708 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 21:51:49.246545  440708 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-356915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:51:49.555924  440708 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 21:51:49.556406  440708 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-356915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 21:51:50.010462  440708 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:51:50.727123  440708 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:51:51.436266  440708 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 21:51:51.436669  440708 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:51:52.916077  440708 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:51:53.197080  440708 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:51:53.498424  440708 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:51:53.829528  440708 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:51:53.830582  440708 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:51:53.833083  440708 out.go:204]   - Booting up control plane ...
	I1025 21:51:53.833196  440708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:51:53.840466  440708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:51:53.849381  440708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:51:53.849478  440708 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:51:53.850881  440708 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 21:52:06.853087  440708 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002746 seconds
	I1025 21:52:06.853275  440708 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:52:06.868290  440708 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:52:07.403137  440708 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:52:07.403294  440708 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-356915 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1025 21:52:07.911290  440708 kubeadm.go:322] [bootstrap-token] Using token: rfpt5i.ewhflp54hk68ld0o
	I1025 21:52:07.913056  440708 out.go:204]   - Configuring RBAC rules ...
	I1025 21:52:07.913191  440708 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:52:07.919562  440708 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:52:07.928586  440708 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:52:07.931682  440708 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:52:07.934927  440708 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:52:07.938939  440708 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:52:07.951029  440708 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:52:08.369048  440708 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 21:52:08.413293  440708 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 21:52:08.415007  440708 kubeadm.go:322] 
	I1025 21:52:08.415079  440708 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 21:52:08.415089  440708 kubeadm.go:322] 
	I1025 21:52:08.415162  440708 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 21:52:08.415175  440708 kubeadm.go:322] 
	I1025 21:52:08.415200  440708 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 21:52:08.415261  440708 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:52:08.415315  440708 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:52:08.415323  440708 kubeadm.go:322] 
	I1025 21:52:08.415372  440708 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 21:52:08.415446  440708 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:52:08.415513  440708 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:52:08.415522  440708 kubeadm.go:322] 
	I1025 21:52:08.415600  440708 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:52:08.415675  440708 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 21:52:08.415683  440708 kubeadm.go:322] 
	I1025 21:52:08.415762  440708 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rfpt5i.ewhflp54hk68ld0o \
	I1025 21:52:08.415864  440708 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 \
	I1025 21:52:08.415889  440708 kubeadm.go:322]     --control-plane 
	I1025 21:52:08.415894  440708 kubeadm.go:322] 
	I1025 21:52:08.415977  440708 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:52:08.415982  440708 kubeadm.go:322] 
	I1025 21:52:08.416061  440708 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rfpt5i.ewhflp54hk68ld0o \
	I1025 21:52:08.416163  440708 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 
	I1025 21:52:08.419736  440708 kubeadm.go:322] W1025 21:51:45.343523    1111 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 21:52:08.419949  440708 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1025 21:52:08.420054  440708 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:52:08.420178  440708 kubeadm.go:322] W1025 21:51:53.840585    1111 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 21:52:08.420298  440708 kubeadm.go:322] W1025 21:51:53.842567    1111 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 21:52:08.420316  440708 cni.go:84] Creating CNI manager for ""
	I1025 21:52:08.420328  440708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:52:08.422412  440708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 21:52:08.424306  440708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 21:52:08.429315  440708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1025 21:52:08.429337  440708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 21:52:08.451384  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 21:52:08.889370  440708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:52:08.889513  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:08.889586  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=ingress-addon-legacy-356915 minikube.k8s.io/updated_at=2023_10_25T21_52_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:09.074154  440708 ops.go:34] apiserver oom_adj: -16
	I1025 21:52:09.074248  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:09.179986  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:09.789204  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:10.289754  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:10.789216  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:11.289851  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:11.789578  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:12.289624  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:12.790062  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:13.289904  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:13.789919  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:14.289225  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:14.789912  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:15.289926  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:15.789445  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:16.289527  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:16.789762  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:17.289917  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:17.789628  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:18.289192  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:18.789173  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:19.289768  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:19.789856  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:20.289813  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:20.789921  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:21.289918  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:21.789779  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:22.289881  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:22.789941  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:23.289443  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:23.789263  440708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:52:23.911283  440708 kubeadm.go:1081] duration metric: took 15.021820478s to wait for elevateKubeSystemPrivileges.
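
The half-second cadence of the repeated `kubectl get sa default` runs above appears to be minikube polling until the controller-manager has created the "default" ServiceAccount before granting it elevated privileges. The same wait, as a shell sketch using the paths from the log:

    # Poll twice a second until the "default" ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
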
	I1025 21:52:23.911313  440708 kubeadm.go:406] StartCluster complete in 38.759922765s
	I1025 21:52:23.911331  440708 settings.go:142] acquiring lock: {Name:mk9df4aad1a9be3e880e7cbb06d6b12a9835859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:52:23.911399  440708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:52:23.912164  440708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/kubeconfig: {Name:mk815098196b1e4c9adc580a5ae817d2d2e4d151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:52:23.912939  440708 kapi.go:59] client config for ingress-addon-legacy-356915: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key", CAFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:52:23.913153  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:52:23.913440  440708 config.go:182] Loaded profile config "ingress-addon-legacy-356915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1025 21:52:23.913552  440708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 21:52:23.913624  440708 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-356915"
	I1025 21:52:23.913643  440708 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-356915"
	I1025 21:52:23.913695  440708 host.go:66] Checking if "ingress-addon-legacy-356915" exists ...
	I1025 21:52:23.914168  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:52:23.914338  440708 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 21:52:23.914690  440708 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-356915"
	I1025 21:52:23.914711  440708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-356915"
	I1025 21:52:23.914988  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:52:23.979957  440708 kapi.go:59] client config for ingress-addon-legacy-356915: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key", CAFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:52:23.980220  440708 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-356915"
	I1025 21:52:23.980249  440708 host.go:66] Checking if "ingress-addon-legacy-356915" exists ...
	I1025 21:52:23.980697  440708 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-356915 --format={{.State.Status}}
	I1025 21:52:23.985002  440708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:52:23.986808  440708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:52:23.986825  440708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:52:23.986893  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:52:23.996235  440708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-356915" context rescaled to 1 replicas
	I1025 21:52:23.996280  440708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 21:52:24.001520  440708 out.go:177] * Verifying Kubernetes components...
	I1025 21:52:24.004378  440708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:52:24.030811  440708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:52:24.030836  440708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:52:24.030906  440708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-356915
	I1025 21:52:24.042237  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
	I1025 21:52:24.066781  440708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/ingress-addon-legacy-356915/id_rsa Username:docker}
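
The `docker container inspect -f` template above digs the published host port for the container's 22/tcp out of .NetworkSettings.Ports; `index` is how Go templates subscript maps and slices. Run by hand against the same container:

    # Outer index: the map key "22/tcp"; inner index: the first binding.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      ingress-addon-legacy-356915
    # -> 33123 (the port the sshutil lines above connect to on 127.0.0.1)
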
	I1025 21:52:24.282751  440708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:52:24.346694  440708 kapi.go:59] client config for ingress-addon-legacy-356915: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.key", CAFile:"/home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:52:24.347150  440708 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-356915" to be "Ready" ...
	I1025 21:52:24.348089  440708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 21:52:24.350539  440708 node_ready.go:49] node "ingress-addon-legacy-356915" has status "Ready":"True"
	I1025 21:52:24.350606  440708 node_ready.go:38] duration metric: took 3.404334ms waiting for node "ingress-addon-legacy-356915" to be "Ready" ...
	I1025 21:52:24.350633  440708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:52:24.356852  440708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:52:24.360601  440708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:24.939395  440708 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
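
The sed pipeline at 21:52:24.348 is how the host record gets injected: it inserts a hosts block before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then feeds the result to `kubectl replace`. A quick way to confirm, with the fragment reconstructed from those sed expressions (a sketch, not captured output):

    # Inspect the patched Corefile:
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected to now contain, per the sed expressions above:
    #        log
    #        errors
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }
    #        forward . /etc/resolv.conf
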
	I1025 21:52:25.012192  440708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 21:52:25.013860  440708 addons.go:502] enable addons completed in 1.100293212s: enabled=[default-storageclass storage-provisioner]
	I1025 21:52:26.382269  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:28.877298  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:30.877619  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:33.376732  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:35.377413  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:37.877518  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:40.377381  440708 pod_ready.go:102] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"False"
	I1025 21:52:40.877997  440708 pod_ready.go:92] pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:40.878026  440708 pod_ready.go:81] duration metric: took 16.517352524s waiting for pod "coredns-66bff467f8-cmmgh" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.878056  440708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-jvhfm" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.879927  440708 pod_ready.go:97] error getting pod "coredns-66bff467f8-jvhfm" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-jvhfm" not found
	I1025 21:52:40.879963  440708 pod_ready.go:81] duration metric: took 1.898912ms waiting for pod "coredns-66bff467f8-jvhfm" in "kube-system" namespace to be "Ready" ...
	E1025 21:52:40.879974  440708 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-jvhfm" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-jvhfm" not found
	I1025 21:52:40.879988  440708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.884364  440708 pod_ready.go:92] pod "etcd-ingress-addon-legacy-356915" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:40.884390  440708 pod_ready.go:81] duration metric: took 4.391846ms waiting for pod "etcd-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.884403  440708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.889195  440708 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-356915" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:40.889219  440708 pod_ready.go:81] duration metric: took 4.808146ms waiting for pod "kube-apiserver-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.889231  440708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.893921  440708 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-356915" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:40.893947  440708 pod_ready.go:81] duration metric: took 4.683773ms waiting for pod "kube-controller-manager-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:40.893958  440708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxzs8" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:41.072702  440708 request.go:629] Waited for 176.248496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-356915
	I1025 21:52:41.075537  440708 pod_ready.go:92] pod "kube-proxy-bxzs8" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:41.075561  440708 pod_ready.go:81] duration metric: took 181.595394ms waiting for pod "kube-proxy-bxzs8" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:41.075572  440708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:41.272970  440708 request.go:629] Waited for 197.331024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-356915
	I1025 21:52:41.473112  440708 request.go:629] Waited for 197.357436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-356915
	I1025 21:52:41.475769  440708 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-356915" in "kube-system" namespace has status "Ready":"True"
	I1025 21:52:41.475791  440708 pod_ready.go:81] duration metric: took 400.21102ms waiting for pod "kube-scheduler-ingress-addon-legacy-356915" in "kube-system" namespace to be "Ready" ...
	I1025 21:52:41.475801  440708 pod_ready.go:38] duration metric: took 17.125121953s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:52:41.475817  440708 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:52:41.475877  440708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:52:41.488838  440708 api_server.go:72] duration metric: took 17.492526545s to wait for apiserver process to appear ...
	I1025 21:52:41.488870  440708 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:52:41.488886  440708 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 21:52:41.497899  440708 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 21:52:41.498777  440708 api_server.go:141] control plane version: v1.18.20
	I1025 21:52:41.498798  440708 api_server.go:131] duration metric: took 9.921208ms to wait for apiserver health ...
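
The healthz probe above is a plain HTTPS GET; an HTTP 200 with body `ok` is what counts as healthy. Checked by hand (assuming anonymous access to /healthz, which the default system:public-info-viewer binding allows; -k skips verification against the cluster CA):

    curl -k https://192.168.49.2:8443/healthz
    # -> ok
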
	I1025 21:52:41.498806  440708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:52:41.673194  440708 request.go:629] Waited for 174.306221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:52:41.679815  440708 system_pods.go:59] 8 kube-system pods found
	I1025 21:52:41.679858  440708 system_pods.go:61] "coredns-66bff467f8-cmmgh" [7a67ae73-b2ab-498e-aafe-838c9611e66a] Running
	I1025 21:52:41.679866  440708 system_pods.go:61] "etcd-ingress-addon-legacy-356915" [e7c3e605-d441-4448-934f-3c272ba7e801] Running
	I1025 21:52:41.679872  440708 system_pods.go:61] "kindnet-cr9jv" [976150ec-3656-4107-aa0f-17a870ea00fc] Running
	I1025 21:52:41.679877  440708 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-356915" [78961981-8ebb-4ff4-bae8-333e4fcef7b1] Running
	I1025 21:52:41.679882  440708 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-356915" [ba114ea5-d8fd-48d6-89b8-489b9b218f72] Running
	I1025 21:52:41.679888  440708 system_pods.go:61] "kube-proxy-bxzs8" [871c98c3-7d17-499e-9fff-4cb4e9f17105] Running
	I1025 21:52:41.679893  440708 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-356915" [d3883bbc-250d-4f07-be28-1dc5f09387f9] Running
	I1025 21:52:41.679905  440708 system_pods.go:61] "storage-provisioner" [ea735b8b-39f3-477f-8832-637223181a3b] Running
	I1025 21:52:41.679913  440708 system_pods.go:74] duration metric: took 181.099219ms to wait for pod list to return data ...
	I1025 21:52:41.679924  440708 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:52:41.873300  440708 request.go:629] Waited for 193.30555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1025 21:52:41.875688  440708 default_sa.go:45] found service account: "default"
	I1025 21:52:41.875712  440708 default_sa.go:55] duration metric: took 195.782393ms for default service account to be created ...
	I1025 21:52:41.875723  440708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:52:42.073192  440708 request.go:629] Waited for 197.367349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1025 21:52:42.079971  440708 system_pods.go:86] 8 kube-system pods found
	I1025 21:52:42.080015  440708 system_pods.go:89] "coredns-66bff467f8-cmmgh" [7a67ae73-b2ab-498e-aafe-838c9611e66a] Running
	I1025 21:52:42.080023  440708 system_pods.go:89] "etcd-ingress-addon-legacy-356915" [e7c3e605-d441-4448-934f-3c272ba7e801] Running
	I1025 21:52:42.080029  440708 system_pods.go:89] "kindnet-cr9jv" [976150ec-3656-4107-aa0f-17a870ea00fc] Running
	I1025 21:52:42.080034  440708 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-356915" [78961981-8ebb-4ff4-bae8-333e4fcef7b1] Running
	I1025 21:52:42.080067  440708 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-356915" [ba114ea5-d8fd-48d6-89b8-489b9b218f72] Running
	I1025 21:52:42.080085  440708 system_pods.go:89] "kube-proxy-bxzs8" [871c98c3-7d17-499e-9fff-4cb4e9f17105] Running
	I1025 21:52:42.080091  440708 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-356915" [d3883bbc-250d-4f07-be28-1dc5f09387f9] Running
	I1025 21:52:42.080097  440708 system_pods.go:89] "storage-provisioner" [ea735b8b-39f3-477f-8832-637223181a3b] Running
	I1025 21:52:42.080105  440708 system_pods.go:126] duration metric: took 204.376181ms to wait for k8s-apps to be running ...
	I1025 21:52:42.080119  440708 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:52:42.080209  440708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:52:42.096910  440708 system_svc.go:56] duration metric: took 16.778254ms WaitForService to wait for kubelet.
	I1025 21:52:42.096937  440708 kubeadm.go:581] duration metric: took 18.100631589s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 21:52:42.096961  440708 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:52:42.273374  440708 request.go:629] Waited for 176.313598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1025 21:52:42.276470  440708 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 21:52:42.276507  440708 node_conditions.go:123] node cpu capacity is 2
	I1025 21:52:42.276521  440708 node_conditions.go:105] duration metric: took 179.554537ms to run NodePressure ...
	I1025 21:52:42.276533  440708 start.go:228] waiting for startup goroutines ...
	I1025 21:52:42.276540  440708 start.go:233] waiting for cluster config update ...
	I1025 21:52:42.276550  440708 start.go:242] writing updated cluster config ...
	I1025 21:52:42.276918  440708 ssh_runner.go:195] Run: rm -f paused
	I1025 21:52:42.346407  440708 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1025 21:52:42.348802  440708 out.go:177] 
	W1025 21:52:42.351260  440708 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1025 21:52:42.353221  440708 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1025 21:52:42.355209  440708 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-356915" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	45f211d8c11ed       97e050c3e21e9       10 seconds ago       Exited              hello-world-app           2                   dadf04777dec0       hello-world-app-5f5d8b66bb-b58xq
	b45c96cdbc380       aae348c9fbd40       35 seconds ago       Running             nginx                     0                   128e60d084307       nginx
	c3bd32fd1fb53       d7f0cba3aa5bf       48 seconds ago       Exited              controller                0                   294a6f3d5332c       ingress-nginx-controller-7fcf777cb7-jzjrj
	0953ec2853850       a883f7fc35610       53 seconds ago       Exited              patch                     0                   779241f6fd749       ingress-nginx-admission-patch-7dqmh
	ae10b9b2319e4       a883f7fc35610       53 seconds ago       Exited              create                    0                   683e6c2f5d0e0       ingress-nginx-admission-create-47vgm
	2394c71f8e5cf       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   6aa0f6cd20199       coredns-66bff467f8-cmmgh
	fdd5da20751ef       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   e3c5ac78c4e3b       storage-provisioner
	9e98fc52dec64       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   b85daaabfbaef       kindnet-cr9jv
	2f25b4ccd4dba       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   2925bf5f210a1       kube-proxy-bxzs8
	f1c2589aba048       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   2f66a0cbb554d       kube-controller-manager-ingress-addon-legacy-356915
	fd1a1584fbb60       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   b5a5c901e2077       kube-apiserver-ingress-addon-legacy-356915
	5fc7aad119254       095f37015706d       About a minute ago   Running             kube-scheduler            0                   a1f35f37ca044       kube-scheduler-ingress-addon-legacy-356915
	d8da361a1a68d       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   979c5c24b9085       etcd-ingress-addon-legacy-356915
	
	* 
	* ==> containerd <==
	* Oct 25 21:53:30 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:30.027879903Z" level=info msg="RemoveContainer for \"f50f10cfd416c9e3aa2903f74cffca3bf7e601a93fb444bf73a703e83e69ca22\" returns successfully"
	Oct 25 21:53:32 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:32.579471090Z" level=info msg="StopContainer for \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" with timeout 2 (s)"
	Oct 25 21:53:32 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:32.579860346Z" level=info msg="Stop container \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" with signal terminated"
	Oct 25 21:53:32 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:32.604986092Z" level=info msg="StopContainer for \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" with timeout 2 (s)"
	Oct 25 21:53:32 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:32.605474383Z" level=info msg="Skipping the sending of signal terminated to container \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" because a prior stop with timeout>0 request already sent the signal"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.592557869Z" level=info msg="Kill container \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.605880030Z" level=info msg="Kill container \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.667774836Z" level=info msg="shim disconnected" id=c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.667837933Z" level=warning msg="cleaning up after shim disconnected" id=c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c namespace=k8s.io
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.667850282Z" level=info msg="cleaning up dead shim"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.677730740Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:53:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4637 runtime=io.containerd.runc.v2\n"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.680568797Z" level=info msg="StopContainer for \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" returns successfully"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.680788233Z" level=info msg="StopContainer for \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" returns successfully"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.681485466Z" level=info msg="StopPodSandbox for \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.681741373Z" level=info msg="Container to stop \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.681668225Z" level=info msg="StopPodSandbox for \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.682589022Z" level=info msg="Container to stop \"c3bd32fd1fb5334a1947194fcddd788743251abc81b2d54d24e112add4c90e5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.715979272Z" level=info msg="shim disconnected" id=294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.716053651Z" level=warning msg="cleaning up after shim disconnected" id=294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405 namespace=k8s.io
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.716064268Z" level=info msg="cleaning up dead shim"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.726794878Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:53:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4676 runtime=io.containerd.runc.v2\n"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.783904190Z" level=info msg="TearDown network for sandbox \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\" successfully"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.783982639Z" level=info msg="StopPodSandbox for \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\" returns successfully"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.785707384Z" level=info msg="TearDown network for sandbox \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\" successfully"
	Oct 25 21:53:34 ingress-addon-legacy-356915 containerd[826]: time="2023-10-25T21:53:34.785750813Z" level=info msg="StopPodSandbox for \"294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405\" returns successfully"
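
The containerd messages above are the CRI shutdown sequence the kubelet drives when the ingress controller pod is deleted: StopContainer (SIGTERM with a timeout, then Kill), followed by StopPodSandbox and network teardown. The same sequence can be driven by hand with crictl, using ID prefixes abbreviated as in the table above:

    # SIGTERM the container, escalating to SIGKILL after the 2s timeout...
    sudo crictl stop --timeout 2 c3bd32fd1fb53
    # ...then tear down the pod sandbox and its network namespace.
    sudo crictl stopp 294a6f3d5332c
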
	
	* 
	* ==> coredns [2394c71f8e5cfc0ec653b4a8afda49882605e47a585b8737e784c900e8f6a65e] <==
	* [INFO] 10.244.0.5:33628 - 44667 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004343s
	[INFO] 10.244.0.5:33412 - 2999 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002688961s
	[INFO] 10.244.0.5:33628 - 34844 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002016565s
	[INFO] 10.244.0.5:33628 - 50388 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001955364s
	[INFO] 10.244.0.5:33412 - 17996 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001399611s
	[INFO] 10.244.0.5:52555 - 10827 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090666s
	[INFO] 10.244.0.5:33628 - 28781 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102908s
	[INFO] 10.244.0.5:33412 - 12293 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035569s
	[INFO] 10.244.0.5:52555 - 8903 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032377s
	[INFO] 10.244.0.5:52555 - 54334 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066732s
	[INFO] 10.244.0.5:52555 - 35428 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082863s
	[INFO] 10.244.0.5:52555 - 6356 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055195s
	[INFO] 10.244.0.5:52555 - 24412 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052685s
	[INFO] 10.244.0.5:52555 - 56521 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002048672s
	[INFO] 10.244.0.5:52555 - 56607 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001069603s
	[INFO] 10.244.0.5:52555 - 36412 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063368s
	[INFO] 10.244.0.5:34078 - 32514 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070137s
	[INFO] 10.244.0.5:34078 - 29038 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000201139s
	[INFO] 10.244.0.5:34078 - 46205 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076053s
	[INFO] 10.244.0.5:34078 - 47366 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062908s
	[INFO] 10.244.0.5:34078 - 9053 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034371s
	[INFO] 10.244.0.5:34078 - 19216 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046916s
	[INFO] 10.244.0.5:34078 - 47459 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001200892s
	[INFO] 10.244.0.5:34078 - 23515 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000892242s
	[INFO] 10.244.0.5:34078 - 65297 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003703s
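	
	The runs of NXDOMAIN answers above are ordinary resolv.conf search-path expansion, not a resolver fault: the queried name has fewer dots than ndots, so every search domain is tried (and fails) before the name is resolved as written with NOERROR. A minimal sketch of that expansion, assuming the kubelet-generated resolver defaults (ndots:5 and the querying pod's search list; the values here are read off the log and otherwise illustrative):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expandQuery mimics the resolver behaviour visible in the coredns log:
	// a name with fewer than ndots dots is tried against each search domain
	// first, and only then as written, producing the NXDOMAIN/.../NOERROR runs.
	func expandQuery(name string, search []string, ndots int) []string {
		if strings.HasSuffix(name, ".") || strings.Count(name, ".") >= ndots {
			return []string{name}
		}
		out := make([]string, 0, len(search)+1)
		for _, domain := range search {
			out = append(out, name+"."+domain)
		}
		return append(out, name) // finally, the name as typed
	}
	
	func main() {
		search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}
		for _, q := range expandQuery("hello-world-app.default.svc.cluster.local", search, 5) {
			fmt.Println(q) // matches the query order logged for 10.244.0.5
		}
	}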
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-356915
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-356915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=ingress-addon-legacy-356915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T21_52_08_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-356915
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:53:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:53:11 +0000   Wed, 25 Oct 2023 21:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:53:11 +0000   Wed, 25 Oct 2023 21:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:53:11 +0000   Wed, 25 Oct 2023 21:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:53:11 +0000   Wed, 25 Oct 2023 21:52:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-356915
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 442fa3e25bf345f7b0cd93fec28e2eef
	  System UUID:                27c32387-d7e9-422a-a4fe-3b0f9e70c7e9
	  Boot ID:                    dc9d99ba-cdb2-4b53-84d7-7ab685ba34f1
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-b58xq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-cmmgh                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-ingress-addon-legacy-356915                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kindnet-cr9jv                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-ingress-addon-legacy-356915             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-356915    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-bxzs8                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-ingress-addon-legacy-356915             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  103s (x5 over 104s)  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x5 over 104s)  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x5 over 104s)  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                  kubelet     Node ingress-addon-legacy-356915 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                  kubelet     Node ingress-addon-legacy-356915 status is now: NodeReady
	  Normal  Starting                 75s                  kube-proxy  Starting kube-proxy.
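	
	The percentages in the Allocated resources table above follow directly from the node's allocatable capacity (2 CPUs = 2000m, 8022496Ki memory), with kubectl truncating toward zero:
	
	  cpu requests:    750m  / 2000m              = 37.5%  -> shown as 37%
	  cpu limits:      100m  / 2000m              = 5%
	  memory requests: 120Mi = 122880Ki;  122880 / 8022496 ≈ 1.5% -> shown as 1%
	  memory limits:   220Mi = 225280Ki;  225280 / 8022496 ≈ 2.8% -> shown as 2%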
	
	* 
	* ==> dmesg <==
	* [  +0.000733] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=00000000d337f5d3
	[  +0.001074] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +2.898689] FS-Cache: Duplicate cookie detected
	[  +0.000721] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001017] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=00000000dd16fe58
	[  +0.001116] FS-Cache: O-key=[8] 'e43a5c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=000000007e24298e
	[  +0.001122] FS-Cache: N-key=[8] 'e43a5c0100000000'
	[  +0.398811] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=000000003c868379
	[  +0.001081] FS-Cache: O-key=[8] 'ea3a5c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=00000000f33d2ab3
	[  +0.001092] FS-Cache: N-key=[8] 'ea3a5c0100000000'
	[  +3.995332] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001051] FS-Cache: O-cookie d=000000002ab2478e{9P.session} n=00000000573aea2f
	[  +0.001110] FS-Cache: O-key=[10] '34323936323930373639'
	[  +0.000782] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=000000002ab2478e{9P.session} n=00000000bfeaf0de
	[  +0.001070] FS-Cache: N-key=[10] '34323936323930373639'
	[Oct25 21:51] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [d8da361a1a68d2bbf987c8893d3fdf6e74ad97fa3718b3a036f42614be5a83df] <==
	* raft2023/10/25 21:51:57 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/25 21:51:57 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-25 21:51:57.947678 W | auth: simple token is not cryptographically signed
	2023-10-25 21:51:57.956846 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/25 21:51:57 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-25 21:51:57.957899 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-25 21:51:57.958191 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-25 21:51:57.961667 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-25 21:51:57.961932 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-25 21:51:57.962219 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/25 21:51:58 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/25 21:51:58 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/25 21:51:58 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/25 21:51:58 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/25 21:51:58 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-25 21:51:58.862086 I | etcdserver: published {Name:ingress-addon-legacy-356915 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-25 21:51:58.897093 I | embed: ready to serve client requests
	2023-10-25 21:51:58.906484 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-25 21:51:59.084627 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-25 21:51:59.085132 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-25 21:51:59.109104 I | embed: ready to serve client requests
	2023-10-25 21:51:59.114445 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-25 21:51:59.383700 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-25 21:52:00.580303 W | etcdserver: request "ID:8128024709823978756 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (1.495055132s) to execute
	2023-10-25 21:52:08.317295 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:181" took too long (109.585882ms) to execute
	
	* 
	* ==> kernel <==
	*  21:53:40 up  1:35,  0 users,  load average: 1.80, 1.99, 2.45
	Linux ingress-addon-legacy-356915 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [9e98fc52dec6462b2368ae2007036f5f02a768dc562dbe6bd40fa948e28c7312] <==
	* I1025 21:52:26.419304       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1025 21:52:26.419371       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 21:52:26.419549       1 main.go:116] setting mtu 1500 for CNI 
	I1025 21:52:26.419567       1 main.go:146] kindnetd IP family: "ipv4"
	I1025 21:52:26.419581       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1025 21:52:26.815287       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:52:26.815327       1 main.go:227] handling current node
	I1025 21:52:36.831545       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:52:36.831578       1 main.go:227] handling current node
	I1025 21:52:46.841940       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:52:46.842119       1 main.go:227] handling current node
	I1025 21:52:56.845456       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:52:56.845480       1 main.go:227] handling current node
	I1025 21:53:06.856062       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:53:06.856093       1 main.go:227] handling current node
	I1025 21:53:16.868261       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:53:16.868289       1 main.go:227] handling current node
	I1025 21:53:26.872146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:53:26.872175       1 main.go:227] handling current node
	I1025 21:53:36.884159       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1025 21:53:36.884188       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [fd1a1584fbb600673c58f7f0c27f3eb6f48bc9780b6525825ddcd9b7ea230b60] <==
	* E1025 21:52:04.875545       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1025 21:52:05.041369       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:52:05.041615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 21:52:05.050440       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1025 21:52:05.052456       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:52:05.059931       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1025 21:52:05.831797       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1025 21:52:05.831883       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1025 21:52:05.844636       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1025 21:52:05.848400       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1025 21:52:05.848598       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1025 21:52:06.283108       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:52:06.325993       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1025 21:52:06.381430       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 21:52:06.382820       1 controller.go:609] quota admission added evaluator for: endpoints
	I1025 21:52:06.387818       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:52:07.275936       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1025 21:52:08.345491       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1025 21:52:08.399861       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1025 21:52:11.571482       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 21:52:23.554744       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1025 21:52:24.082828       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1025 21:52:43.245713       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1025 21:53:02.670965       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1025 21:53:31.669833       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400d003c08), encoder:(*versioning.codec)(0x400ab93360), buf:(*bytes.Buffer)(0x4005938b40)})
	
	* 
	* ==> kube-controller-manager [f1c2589aba0482fb21055f7161170cdf819557d4c9e65b3b983461fc96a7dd80] <==
	* I1025 21:52:23.655984       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0c085a43-d03e-45b4-a9c2-42474ffcb593", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-cmmgh
	I1025 21:52:23.656528       1 shared_informer.go:230] Caches are synced for service account 
	I1025 21:52:23.799613       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1025 21:52:23.821671       1 shared_informer.go:230] Caches are synced for attach detach 
	I1025 21:52:23.998036       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"53b0a84d-353b-408d-9c69-502b03e0c1ed", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1025 21:52:24.048633       1 shared_informer.go:230] Caches are synced for resource quota 
	I1025 21:52:24.050866       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1025 21:52:24.084922       1 shared_informer.go:230] Caches are synced for resource quota 
	I1025 21:52:24.086847       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0c085a43-d03e-45b4-a9c2-42474ffcb593", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-jvhfm
	I1025 21:52:24.100606       1 shared_informer.go:230] Caches are synced for stateful set 
	I1025 21:52:24.155473       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:52:24.155500       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 21:52:24.179773       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"5ae8c1e6-f3a2-4a4b-9156-ea2285538c9e", APIVersion:"apps/v1", ResourceVersion:"222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-bxzs8
	I1025 21:52:24.182704       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:52:24.224004       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"903f1fb2-b8d2-4478-9c9b-324f9da3e275", APIVersion:"apps/v1", ResourceVersion:"235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-cr9jv
	E1025 21:52:24.256311       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"5ae8c1e6-f3a2-4a4b-9156-ea2285538c9e", ResourceVersion:"222", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833867528, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c0e880), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000c0e8a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c0e8c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40003a4780), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000c0e8e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c0e900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c0e940)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40000bb9a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000a06498), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40009b5260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000fb60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000a064f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1025 21:52:43.239336       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a2d02313-cab9-400c-8cd7-d5d4892bd001", APIVersion:"apps/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1025 21:52:43.251121       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"56e05366-8fad-4bdb-9537-e4a727e4093d", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-jzjrj
	I1025 21:52:43.279126       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e9e424c8-1482-470a-af87-426b68f0b5e1", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-47vgm
	I1025 21:52:43.337007       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6384990d-e11c-407b-8c2b-5d0b0c50f408", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-7dqmh
	I1025 21:52:47.820477       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6384990d-e11c-407b-8c2b-5d0b0c50f408", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:52:47.847196       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e9e424c8-1482-470a-af87-426b68f0b5e1", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:53:12.471336       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"421ff43b-b571-4456-93a0-bd67b6437e7e", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1025 21:53:12.482747       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"90f7738e-8fa9-41d4-97cf-d92d69a2f3fa", APIVersion:"apps/v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-b58xq
	E1025 21:53:37.270447       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-fvsk7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [2f25b4ccd4dba610a20f9fa32b5a3447de415956ec573be8e12b22bb51cfad62] <==
	* W1025 21:52:25.081504       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1025 21:52:25.093628       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1025 21:52:25.093739       1 server_others.go:186] Using iptables Proxier.
	I1025 21:52:25.094276       1 server.go:583] Version: v1.18.20
	I1025 21:52:25.102372       1 config.go:133] Starting endpoints config controller
	I1025 21:52:25.102480       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1025 21:52:25.102572       1 config.go:315] Starting service config controller
	I1025 21:52:25.102606       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1025 21:52:25.202700       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1025 21:52:25.202860       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5fc7aad119254f14dd9b5229f5b68b0f8e5e7c9823c2268f870a9c78b294afb3] <==
	* W1025 21:52:05.016622       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 21:52:05.049925       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:52:05.050022       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:52:05.052326       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1025 21:52:05.052618       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:52:05.052704       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:52:05.054578       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1025 21:52:05.055940       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:52:05.058248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:52:05.065714       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:52:05.065906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:52:05.066044       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:52:05.065982       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:52:05.066147       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:52:05.066216       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:52:05.066444       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:52:05.066663       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:52:05.066875       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:52:05.067114       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:52:05.958035       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:52:05.984518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:52:05.986643       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:52:06.137891       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:52:06.296131       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 21:52:09.352933       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Oct 25 21:53:15 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:15.951727    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7a6c6d3d2605eb848b43169fea88d3ccc47faaee3367989d816e5aee74ebc62a
	Oct 25 21:53:15 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:15.952079    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8149a80ea5e2e401c1bfdda5cdfd8d1a38b7ecd4770add3456e5db5e788487c2
	Oct 25 21:53:15 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:15.952319    1681 pod_workers.go:191] Error syncing pod 29e0cd1b-97a2-4c23-b195-811922995a94 ("hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"
	Oct 25 21:53:16 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:16.955213    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8149a80ea5e2e401c1bfdda5cdfd8d1a38b7ecd4770add3456e5db5e788487c2
	Oct 25 21:53:16 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:16.955462    1681 pod_workers.go:191] Error syncing pod 29e0cd1b-97a2-4c23-b195-811922995a94 ("hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"
	Oct 25 21:53:25 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:25.655976    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f50f10cfd416c9e3aa2903f74cffca3bf7e601a93fb444bf73a703e83e69ca22
	Oct 25 21:53:25 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:25.656766    1681 pod_workers.go:191] Error syncing pod 9cae7eef-7c9e-4585-9c80-39b65f141379 ("kube-ingress-dns-minikube_kube-system(9cae7eef-7c9e-4585-9c80-39b65f141379)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(9cae7eef-7c9e-4585-9c80-39b65f141379)"
	Oct 25 21:53:28 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:28.472863    1681 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-sf4kq" (UniqueName: "kubernetes.io/secret/9cae7eef-7c9e-4585-9c80-39b65f141379-minikube-ingress-dns-token-sf4kq") pod "9cae7eef-7c9e-4585-9c80-39b65f141379" (UID: "9cae7eef-7c9e-4585-9c80-39b65f141379")
	Oct 25 21:53:28 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:28.477231    1681 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cae7eef-7c9e-4585-9c80-39b65f141379-minikube-ingress-dns-token-sf4kq" (OuterVolumeSpecName: "minikube-ingress-dns-token-sf4kq") pod "9cae7eef-7c9e-4585-9c80-39b65f141379" (UID: "9cae7eef-7c9e-4585-9c80-39b65f141379"). InnerVolumeSpecName "minikube-ingress-dns-token-sf4kq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:53:28 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:28.573295    1681 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-sf4kq" (UniqueName: "kubernetes.io/secret/9cae7eef-7c9e-4585-9c80-39b65f141379-minikube-ingress-dns-token-sf4kq") on node "ingress-addon-legacy-356915" DevicePath ""
	Oct 25 21:53:29 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:29.655992    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8149a80ea5e2e401c1bfdda5cdfd8d1a38b7ecd4770add3456e5db5e788487c2
	Oct 25 21:53:29 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:29.980548    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8149a80ea5e2e401c1bfdda5cdfd8d1a38b7ecd4770add3456e5db5e788487c2
	Oct 25 21:53:29 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:29.980901    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 45f211d8c11ed7e57d9e3ec2eb5d9847c317d5e9d94f5bbc96077f884c71e5f2
	Oct 25 21:53:29 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:29.981216    1681 pod_workers.go:191] Error syncing pod 29e0cd1b-97a2-4c23-b195-811922995a94 ("hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b58xq_default(29e0cd1b-97a2-4c23-b195-811922995a94)"
	Oct 25 21:53:30 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:30.016868    1681 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f50f10cfd416c9e3aa2903f74cffca3bf7e601a93fb444bf73a703e83e69ca22
	Oct 25 21:53:32 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:32.583058    1681 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jzjrj.1791784e64304433", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jzjrj", UID:"f089ecd8-3f76-48a6-8361-ee56e8825fcb", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-356915"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14681772283ac33, ext:84418268461, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14681772283ac33, ext:84418268461, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jzjrj.1791784e64304433" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:53:32 ingress-addon-legacy-356915 kubelet[1681]: E1025 21:53:32.613522    1681 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jzjrj.1791784e64304433", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jzjrj", UID:"f089ecd8-3f76-48a6-8361-ee56e8825fcb", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-356915"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14681772283ac33, ext:84418268461, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14681772409fbbd, ext:84443847872, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jzjrj.1791784e64304433" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:53:35 ingress-addon-legacy-356915 kubelet[1681]: W1025 21:53:35.018501    1681 pod_container_deletor.go:77] Container "294a6f3d5332cb0aabac7663dc49f86d49711f103854e9b90a10d0e72e927405" not found in pod's containers
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.715980    1681 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wb6mv" (UniqueName: "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-ingress-nginx-token-wb6mv") pod "f089ecd8-3f76-48a6-8361-ee56e8825fcb" (UID: "f089ecd8-3f76-48a6-8361-ee56e8825fcb")
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.716068    1681 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-webhook-cert") pod "f089ecd8-3f76-48a6-8361-ee56e8825fcb" (UID: "f089ecd8-3f76-48a6-8361-ee56e8825fcb")
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.723075    1681 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-ingress-nginx-token-wb6mv" (OuterVolumeSpecName: "ingress-nginx-token-wb6mv") pod "f089ecd8-3f76-48a6-8361-ee56e8825fcb" (UID: "f089ecd8-3f76-48a6-8361-ee56e8825fcb"). InnerVolumeSpecName "ingress-nginx-token-wb6mv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.724796    1681 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f089ecd8-3f76-48a6-8361-ee56e8825fcb" (UID: "f089ecd8-3f76-48a6-8361-ee56e8825fcb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.816420    1681 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wb6mv" (UniqueName: "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-ingress-nginx-token-wb6mv") on node "ingress-addon-legacy-356915" DevicePath ""
	Oct 25 21:53:36 ingress-addon-legacy-356915 kubelet[1681]: I1025 21:53:36.816472    1681 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f089ecd8-3f76-48a6-8361-ee56e8825fcb-webhook-cert") on node "ingress-addon-legacy-356915" DevicePath ""
	Oct 25 21:53:37 ingress-addon-legacy-356915 kubelet[1681]: W1025 21:53:37.661492    1681 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/f089ecd8-3f76-48a6-8361-ee56e8825fcb/volumes" does not exist
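	
	The "back-off 10s" / "back-off 20s" pairs above are kubelet's CrashLoopBackOff schedule at work: the restart delay starts at 10s and doubles on each crash, capped at 5m (and reset after a sufficiently long successful run). A minimal sketch of that documented schedule, not kubelet's actual implementation:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// crashLoopDelays lists the successive restart delays for a container
	// that keeps crashing: 10s, doubling per restart, capped at 5m.
	func crashLoopDelays(restarts int) []time.Duration {
		const (
			initial = 10 * time.Second
			ceiling = 5 * time.Minute
		)
		delays := make([]time.Duration, 0, restarts)
		d := initial
		for i := 0; i < restarts; i++ {
			delays = append(delays, d)
			if d *= 2; d > ceiling {
				d = ceiling
			}
		}
		return delays
	}
	
	func main() {
		fmt.Println(crashLoopDelays(6)) // [10s 20s 40s 1m20s 2m40s 5m0s]
	}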
	
	* 
	* ==> storage-provisioner [fdd5da20751ef2a9207531854cb2c3f7b8fc89db25ca245a9c01afaa7cde14e1] <==
	* I1025 21:52:27.484263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:52:27.497750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:52:27.498006       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:52:27.505015       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:52:27.505482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-356915_c7a201a5-3e89-4201-a0c6-fcc342bf661c!
	I1025 21:52:27.506628       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f82e041-5616-45bb-8563-7fa98861b42d", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-356915_c7a201a5-3e89-4201-a0c6-fcc342bf661c became leader
	I1025 21:52:27.606342       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-356915_c7a201a5-3e89-4201-a0c6-fcc342bf661c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-356915 -n ingress-addon-legacy-356915
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-356915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (48.14s)

                                                
                                    
TestScheduledStopUnix (38.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-460131 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-460131 --memory=2048 --driver=docker  --container-runtime=containerd: (33.719661916s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-460131 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-460131 -n scheduled-stop-460131
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-460131 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 511218 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-25 22:10:10.744598695 +0000 UTC m=+1763.961754458
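The assertion at scheduled_stop_test.go:98 checks that rescheduling a stop kills the previously scheduled stop daemon; here PID 511218 survived the reschedule. A sketch of the liveness check that assertion implies, using the conventional Unix signal-0 idiom (the test's actual helper may differ):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// processRunning reports whether pid is still alive by sending signal 0,
// which checks existence and permissions without delivering a signal.
func processRunning(pid int) bool {
	p, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(processRunning(os.Getpid())) // true: this process is alive
}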
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-460131
helpers_test.go:235: (dbg) docker inspect scheduled-stop-460131:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313",
	        "Created": "2023-10-25T22:09:42.044035401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509532,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-25T22:09:42.42572531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313/hostname",
	        "HostsPath": "/var/lib/docker/containers/642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313/hosts",
	        "LogPath": "/var/lib/docker/containers/642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313/642032faed8d674c151fe6dcc841f66545020a28be97269233a2cd925072f313-json.log",
	        "Name": "/scheduled-stop-460131",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-460131:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-460131",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8d05d12ce8e3b750984564fb90c0ad9e13fea2c736c5da5d6975ec685d8f7ae3-init/diff:/var/lib/docker/overlay2/72a373cc1a648bd482c91a7d51c6d15fd52c6262ee2446bc4493d43e0c8c95ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d05d12ce8e3b750984564fb90c0ad9e13fea2c736c5da5d6975ec685d8f7ae3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d05d12ce8e3b750984564fb90c0ad9e13fea2c736c5da5d6975ec685d8f7ae3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d05d12ce8e3b750984564fb90c0ad9e13fea2c736c5da5d6975ec685d8f7ae3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-460131",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-460131/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-460131",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-460131",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-460131",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15f2ec3eab7af40a5401c853402f9354c88f5da7681618509f77f94414d30389",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/15f2ec3eab7a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-460131": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "642032faed8d",
	                        "scheduled-stop-460131"
	                    ],
	                    "NetworkID": "3d24544c168ea9a0acc3fb259b0ccc8cf97337ac96c7a2c03d22e9e99fc5f3f0",
	                    "EndpointID": "fd64edf366472e1aa4fd98ef148cd4aefb5a8d149c37b88a9d4ec1488efc95b7",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
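
The status probes in this post-mortem (e.g. "--format={{.Host}}" just below, or "--format={{.State.Running}}" in the start logs further down) avoid parsing the full JSON above by handing docker inspect a Go template. A rough sketch of that pattern as a hypothetical helper, not minikube's own code:

	// Sketch: query one field of a container's state via an inspect template.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("scheduled-stop-460131")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("status:", status) // e.g. "running"
	}
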
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-460131 -n scheduled-stop-460131
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-460131 logs -n 25
E1025 22:10:12.166053  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-460131 logs -n 25: (1.204148714s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-318283            | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:02 UTC | 25 Oct 23 22:03 UTC |
	| start   | -p multinode-318283            | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:03 UTC | 25 Oct 23 22:04 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-318283       | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:04 UTC |                     |
	| node    | multinode-318283 node delete   | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:04 UTC | 25 Oct 23 22:04 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-318283 stop          | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:04 UTC | 25 Oct 23 22:05 UTC |
	| start   | -p multinode-318283            | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:05 UTC | 25 Oct 23 22:06 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | list -p multinode-318283       | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:06 UTC |                     |
	| start   | -p multinode-318283-m02        | multinode-318283-m02  | jenkins | v1.31.2 | 25 Oct 23 22:06 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| start   | -p multinode-318283-m03        | multinode-318283-m03  | jenkins | v1.31.2 | 25 Oct 23 22:06 UTC | 25 Oct 23 22:07 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | add -p multinode-318283        | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:07 UTC |                     |
	| delete  | -p multinode-318283-m03        | multinode-318283-m03  | jenkins | v1.31.2 | 25 Oct 23 22:07 UTC | 25 Oct 23 22:07 UTC |
	| delete  | -p multinode-318283            | multinode-318283      | jenkins | v1.31.2 | 25 Oct 23 22:07 UTC | 25 Oct 23 22:07 UTC |
	| start   | -p test-preload-851812         | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:07 UTC | 25 Oct 23 22:08 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-851812 image pull | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:08 UTC | 25 Oct 23 22:08 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-851812         | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:08 UTC | 25 Oct 23 22:08 UTC |
	| start   | -p test-preload-851812         | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:08 UTC | 25 Oct 23 22:09 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| image   | test-preload-851812 image list | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:09 UTC | 25 Oct 23 22:09 UTC |
	| delete  | -p test-preload-851812         | test-preload-851812   | jenkins | v1.31.2 | 25 Oct 23 22:09 UTC | 25 Oct 23 22:09 UTC |
	| start   | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:09 UTC | 25 Oct 23 22:10 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-460131       | scheduled-stop-460131 | jenkins | v1.31.2 | 25 Oct 23 22:10 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 22:09:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:09:36.485053  509076 out.go:296] Setting OutFile to fd 1 ...
	I1025 22:09:36.485238  509076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:09:36.485242  509076 out.go:309] Setting ErrFile to fd 2...
	I1025 22:09:36.485247  509076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:09:36.485510  509076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 22:09:36.485887  509076 out.go:303] Setting JSON to false
	I1025 22:09:36.487128  509076 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6714,"bootTime":1698265063,"procs":464,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 22:09:36.487197  509076 start.go:138] virtualization:  
	I1025 22:09:36.489828  509076 out.go:177] * [scheduled-stop-460131] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 22:09:36.491987  509076 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 22:09:36.493723  509076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:09:36.492118  509076 notify.go:220] Checking for updates...
	I1025 22:09:36.497319  509076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 22:09:36.499360  509076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 22:09:36.501137  509076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 22:09:36.502740  509076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:09:36.504552  509076 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 22:09:36.531114  509076 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 22:09:36.531233  509076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 22:09:36.620383  509076 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-25 22:09:36.609807885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 22:09:36.620472  509076 docker.go:295] overlay module found
	I1025 22:09:36.622377  509076 out.go:177] * Using the docker driver based on user configuration
	I1025 22:09:36.624396  509076 start.go:298] selected driver: docker
	I1025 22:09:36.624405  509076 start.go:902] validating driver "docker" against <nil>
	I1025 22:09:36.624418  509076 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:09:36.625050  509076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 22:09:36.708698  509076 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-25 22:09:36.698900825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 22:09:36.708853  509076 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 22:09:36.709093  509076 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 22:09:36.711188  509076 out.go:177] * Using Docker driver with root privileges
	I1025 22:09:36.712884  509076 cni.go:84] Creating CNI manager for ""
	I1025 22:09:36.712895  509076 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 22:09:36.712907  509076 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 22:09:36.712917  509076 start_flags.go:323] config:
	{Name:scheduled-stop-460131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:scheduled-stop-460131 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 22:09:36.715204  509076 out.go:177] * Starting control plane node scheduled-stop-460131 in cluster scheduled-stop-460131
	I1025 22:09:36.717045  509076 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1025 22:09:36.718671  509076 out.go:177] * Pulling base image ...
	I1025 22:09:36.720232  509076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 22:09:36.720283  509076 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1025 22:09:36.720308  509076 cache.go:56] Caching tarball of preloaded images
	I1025 22:09:36.720319  509076 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 22:09:36.720397  509076 preload.go:174] Found /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 22:09:36.720406  509076 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1025 22:09:36.720753  509076 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/config.json ...
	I1025 22:09:36.720772  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/config.json: {Name:mkb1b4b187b0c0c04529532ebc2137cdaced4ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:36.738676  509076 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 22:09:36.738691  509076 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 22:09:36.738705  509076 cache.go:194] Successfully downloaded all kic artifacts
	I1025 22:09:36.738761  509076 start.go:365] acquiring machines lock for scheduled-stop-460131: {Name:mk7f3127aa971ee14f981301eb1f478043ed87dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:09:36.738875  509076 start.go:369] acquired machines lock for "scheduled-stop-460131" in 98.387µs
	I1025 22:09:36.738901  509076 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-460131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:scheduled-stop-460131 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 22:09:36.739016  509076 start.go:125] createHost starting for "" (driver="docker")
	I1025 22:09:36.741271  509076 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 22:09:36.741580  509076 start.go:159] libmachine.API.Create for "scheduled-stop-460131" (driver="docker")
	I1025 22:09:36.741608  509076 client.go:168] LocalClient.Create starting
	I1025 22:09:36.741701  509076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem
	I1025 22:09:36.741736  509076 main.go:141] libmachine: Decoding PEM data...
	I1025 22:09:36.741757  509076 main.go:141] libmachine: Parsing certificate...
	I1025 22:09:36.741828  509076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem
	I1025 22:09:36.741846  509076 main.go:141] libmachine: Decoding PEM data...
	I1025 22:09:36.741855  509076 main.go:141] libmachine: Parsing certificate...
	I1025 22:09:36.742250  509076 cli_runner.go:164] Run: docker network inspect scheduled-stop-460131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 22:09:36.760300  509076 cli_runner.go:211] docker network inspect scheduled-stop-460131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 22:09:36.760381  509076 network_create.go:281] running [docker network inspect scheduled-stop-460131] to gather additional debugging logs...
	I1025 22:09:36.760397  509076 cli_runner.go:164] Run: docker network inspect scheduled-stop-460131
	W1025 22:09:36.779849  509076 cli_runner.go:211] docker network inspect scheduled-stop-460131 returned with exit code 1
	I1025 22:09:36.779870  509076 network_create.go:284] error running [docker network inspect scheduled-stop-460131]: docker network inspect scheduled-stop-460131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-460131 not found
	I1025 22:09:36.779883  509076 network_create.go:286] output of [docker network inspect scheduled-stop-460131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-460131 not found
	
	** /stderr **
	I1025 22:09:36.779991  509076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 22:09:36.798550  509076 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e7bb78699cfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b7:30:de:2e} reservation:<nil>}
	I1025 22:09:36.798865  509076 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b991252f7c15 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3c:f2:47:97} reservation:<nil>}
	I1025 22:09:36.799234  509076 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025d64a0}
	I1025 22:09:36.799250  509076 network_create.go:124] attempt to create docker network scheduled-stop-460131 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 22:09:36.799314  509076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-460131 scheduled-stop-460131
	I1025 22:09:36.870108  509076 network_create.go:108] docker network scheduled-stop-460131 192.168.67.0/24 created
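
The three network.go lines above show the subnet scan in miniature: candidate private /24s are tried in order, the third octet stepping by 9 (49, 58, 67), and the first one not already claimed by an existing bridge wins. An illustrative sketch, with the taken set stubbed in where minikube actually reads the host's interfaces:

	// Sketch: pick the first free 192.168.x.0/24, stepping x by 9 as in the log.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{ // stub; derived from host bridges in reality
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
		}
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // 192.168.67.0/24 here
				return
			}
		}
		fmt.Println("no free subnet found")
	}
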
	I1025 22:09:36.870142  509076 kic.go:118] calculated static IP "192.168.67.2" for the "scheduled-stop-460131" container
	I1025 22:09:36.870212  509076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 22:09:36.887393  509076 cli_runner.go:164] Run: docker volume create scheduled-stop-460131 --label name.minikube.sigs.k8s.io=scheduled-stop-460131 --label created_by.minikube.sigs.k8s.io=true
	I1025 22:09:36.911646  509076 oci.go:103] Successfully created a docker volume scheduled-stop-460131
	I1025 22:09:36.911732  509076 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-460131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-460131 --entrypoint /usr/bin/test -v scheduled-stop-460131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 22:09:37.532778  509076 oci.go:107] Successfully prepared a docker volume scheduled-stop-460131
	I1025 22:09:37.532821  509076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 22:09:37.532840  509076 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 22:09:37.532923  509076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-460131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 22:09:41.945239  509076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-460131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.412278769s)
	I1025 22:09:41.945261  509076 kic.go:200] duration metric: took 4.412417 seconds to extract preloaded images to volume
	W1025 22:09:41.945413  509076 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 22:09:41.945529  509076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 22:09:42.025824  509076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-460131 --name scheduled-stop-460131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-460131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-460131 --network scheduled-stop-460131 --ip 192.168.67.2 --volume scheduled-stop-460131:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 22:09:42.435800  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Running}}
	I1025 22:09:42.463073  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:09:42.490022  509076 cli_runner.go:164] Run: docker exec scheduled-stop-460131 stat /var/lib/dpkg/alternatives/iptables
	I1025 22:09:42.589218  509076 oci.go:144] the created container "scheduled-stop-460131" has a running status.
	I1025 22:09:42.589236  509076 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa...
	I1025 22:09:42.862725  509076 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 22:09:42.891256  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:09:42.914617  509076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 22:09:42.914628  509076 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-460131 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 22:09:43.024852  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:09:43.051571  509076 machine.go:88] provisioning docker machine ...
	I1025 22:09:43.051594  509076 ubuntu.go:169] provisioning hostname "scheduled-stop-460131"
	I1025 22:09:43.051659  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:43.087295  509076 main.go:141] libmachine: Using SSH client type: native
	I1025 22:09:43.087714  509076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33243 <nil> <nil>}
	I1025 22:09:43.087724  509076 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-460131 && echo "scheduled-stop-460131" | sudo tee /etc/hostname
	I1025 22:09:43.088319  509076 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 22:09:46.244615  509076 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-460131
	
	I1025 22:09:46.244696  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:46.267784  509076 main.go:141] libmachine: Using SSH client type: native
	I1025 22:09:46.268214  509076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 33243 <nil> <nil>}
	I1025 22:09:46.268229  509076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-460131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-460131/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-460131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:09:46.406462  509076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:09:46.406477  509076 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17488-401064/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-401064/.minikube}
	I1025 22:09:46.406508  509076 ubuntu.go:177] setting up certificates
	I1025 22:09:46.406516  509076 provision.go:83] configureAuth start
	I1025 22:09:46.406577  509076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-460131
	I1025 22:09:46.424837  509076 provision.go:138] copyHostCerts
	I1025 22:09:46.424893  509076 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem, removing ...
	I1025 22:09:46.424900  509076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem
	I1025 22:09:46.424979  509076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/ca.pem (1082 bytes)
	I1025 22:09:46.425145  509076 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem, removing ...
	I1025 22:09:46.425150  509076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem
	I1025 22:09:46.425181  509076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/cert.pem (1123 bytes)
	I1025 22:09:46.425276  509076 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem, removing ...
	I1025 22:09:46.425279  509076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem
	I1025 22:09:46.425307  509076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-401064/.minikube/key.pem (1675 bytes)
	I1025 22:09:46.425346  509076 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-460131 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube scheduled-stop-460131]
	I1025 22:09:46.839404  509076 provision.go:172] copyRemoteCerts
	I1025 22:09:46.839477  509076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:09:46.839522  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:46.857762  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:09:46.960533  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:09:46.990799  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 22:09:47.021358  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:09:47.049675  509076 provision.go:86] duration metric: configureAuth took 643.143346ms
	I1025 22:09:47.049693  509076 ubuntu.go:193] setting minikube options for container-runtime
	I1025 22:09:47.049891  509076 config.go:182] Loaded profile config "scheduled-stop-460131": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 22:09:47.049897  509076 machine.go:91] provisioned docker machine in 3.9983163s
	I1025 22:09:47.049902  509076 client.go:171] LocalClient.Create took 10.308289703s
	I1025 22:09:47.049915  509076 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-460131" took 10.308334101s
	I1025 22:09:47.049921  509076 start.go:300] post-start starting for "scheduled-stop-460131" (driver="docker")
	I1025 22:09:47.049930  509076 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:09:47.049984  509076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:09:47.050021  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:47.068400  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:09:47.168417  509076 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:09:47.172667  509076 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 22:09:47.172693  509076 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 22:09:47.172703  509076 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 22:09:47.172709  509076 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 22:09:47.172718  509076 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/addons for local assets ...
	I1025 22:09:47.172777  509076 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-401064/.minikube/files for local assets ...
	I1025 22:09:47.172860  509076 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem -> 4064532.pem in /etc/ssl/certs
	I1025 22:09:47.173048  509076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:09:47.183783  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem --> /etc/ssl/certs/4064532.pem (1708 bytes)
	I1025 22:09:47.215366  509076 start.go:303] post-start completed in 165.430127ms
	I1025 22:09:47.215757  509076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-460131
	I1025 22:09:47.237100  509076 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/config.json ...
	I1025 22:09:47.237378  509076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 22:09:47.237424  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:47.255893  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:09:47.355802  509076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 22:09:47.362083  509076 start.go:128] duration metric: createHost completed in 10.623052932s
	I1025 22:09:47.362097  509076 start.go:83] releasing machines lock for "scheduled-stop-460131", held for 10.623214974s
	I1025 22:09:47.362170  509076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-460131
	I1025 22:09:47.381613  509076 ssh_runner.go:195] Run: cat /version.json
	I1025 22:09:47.381681  509076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:09:47.381733  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:47.381746  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:09:47.405544  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:09:47.407400  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:09:47.637336  509076 ssh_runner.go:195] Run: systemctl --version
	I1025 22:09:47.642986  509076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 22:09:47.648658  509076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 22:09:47.679245  509076 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 22:09:47.679322  509076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:09:47.714480  509076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 22:09:47.714492  509076 start.go:472] detecting cgroup driver to use...
	I1025 22:09:47.714523  509076 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 22:09:47.714571  509076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 22:09:47.729500  509076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 22:09:47.743498  509076 docker.go:198] disabling cri-docker service (if available) ...
	I1025 22:09:47.743552  509076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:09:47.760182  509076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:09:47.777697  509076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:09:47.870704  509076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:09:47.989291  509076 docker.go:214] disabling docker service ...
	I1025 22:09:47.989349  509076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:09:48.016493  509076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:09:48.033811  509076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:09:48.141560  509076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:09:48.239481  509076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:09:48.253990  509076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:09:48.275933  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 22:09:48.289117  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 22:09:48.302769  509076 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 22:09:48.302848  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 22:09:48.316303  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 22:09:48.329479  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 22:09:48.342693  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 22:09:48.355703  509076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:09:48.368154  509076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
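
Taken together, the sed invocations between 22:09:48.275 and 22:09:48.368 rewrite /etc/containerd/config.toml for the cgroupfs driver and the runc v2 shim. A condensed sketch of the same in-place patch (one sed, same expressions; assumes GNU sed and the default kicbase config layout):

    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
      -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e '/systemd_cgroup/d' \
      -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
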
	I1025 22:09:48.380819  509076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:09:48.391681  509076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:09:48.405885  509076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:09:48.492868  509076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 22:09:48.632408  509076 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1025 22:09:48.632468  509076 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1025 22:09:48.637878  509076 start.go:540] Will wait 60s for crictl version
	I1025 22:09:48.637941  509076 ssh_runner.go:195] Run: which crictl
	I1025 22:09:48.642715  509076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:09:48.689195  509076 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1025 22:09:48.689261  509076 ssh_runner.go:195] Run: containerd --version
	I1025 22:09:48.721592  509076 ssh_runner.go:195] Run: containerd --version
	I1025 22:09:48.752608  509076 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1025 22:09:48.754484  509076 cli_runner.go:164] Run: docker network inspect scheduled-stop-460131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 22:09:48.772920  509076 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1025 22:09:48.777892  509076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
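
The /etc/hosts edit above is an idempotent replace-then-append: any stale host.minikube.internal line is filtered out, the fresh mapping is appended, and the temp file is copied back whole rather than edited in place. A generic sketch of the pattern (host and ip are placeholders):

    host="host.minikube.internal"; ip="192.168.67.1"
    { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

The same pattern reappears at 22:09:49.012 for control-plane.minikube.internal.
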
	I1025 22:09:48.791863  509076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 22:09:48.791922  509076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:09:48.833105  509076 containerd.go:604] all images are preloaded for containerd runtime.
	I1025 22:09:48.833118  509076 containerd.go:518] Images already preloaded, skipping extraction
	I1025 22:09:48.833180  509076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:09:48.874217  509076 containerd.go:604] all images are preloaded for containerd runtime.
	I1025 22:09:48.874228  509076 cache_images.go:84] Images are preloaded, skipping loading
	I1025 22:09:48.874288  509076 ssh_runner.go:195] Run: sudo crictl info
	I1025 22:09:48.919858  509076 cni.go:84] Creating CNI manager for ""
	I1025 22:09:48.919868  509076 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 22:09:48.919890  509076 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 22:09:48.919907  509076 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-460131 NodeName:scheduled-stop-460131 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:09:48.920050  509076 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-460131"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
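
The kubeadm.yaml generated above chains four documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. kubeadm v1.26 and later ship a `config validate` subcommand (absent from older releases) that can sanity-check such a file on the node before init:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
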
	
	I1025 22:09:48.920119  509076 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=scheduled-stop-460131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:scheduled-stop-460131 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
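
In the kubelet drop-in above, the empty `ExecStart=` line is deliberate: systemd treats an empty assignment as a reset, so the drop-in clears the unit's packaged command line before substituting the minikube-specific one. The merged unit can be inspected on the node with:

    systemctl cat kubelet
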
	I1025 22:09:48.920180  509076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 22:09:48.931139  509076 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:09:48.931217  509076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:09:48.941750  509076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (393 bytes)
	I1025 22:09:48.963496  509076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:09:48.985512  509076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2110 bytes)
	I1025 22:09:49.007554  509076 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1025 22:09:49.012356  509076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:09:49.026581  509076 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131 for IP: 192.168.67.2
	I1025 22:09:49.026601  509076 certs.go:190] acquiring lock for shared ca certs: {Name:mkce8239dfcf921c4b21f688c78784f182dcce0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:49.026745  509076 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key
	I1025 22:09:49.026796  509076 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key
	I1025 22:09:49.026844  509076 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.key
	I1025 22:09:49.026852  509076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.crt with IP's: []
	I1025 22:09:49.373633  509076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.crt ...
	I1025 22:09:49.373653  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.crt: {Name:mk886dacb95f77edee96bf791813952c56ce04c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:49.373870  509076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.key ...
	I1025 22:09:49.373877  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/client.key: {Name:mkf2e93e3b43018216fc8cde34b333223e922b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:49.373987  509076 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key.c7fa3a9e
	I1025 22:09:49.373999  509076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 22:09:50.107529  509076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt.c7fa3a9e ...
	I1025 22:09:50.107545  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt.c7fa3a9e: {Name:mk096ee15293c9ab81bb59ea1e0d92045c397d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:50.107761  509076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key.c7fa3a9e ...
	I1025 22:09:50.107770  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key.c7fa3a9e: {Name:mk4e8f7fbe66cde1fc1c15f07959f5eafe8570aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:50.107857  509076 certs.go:337] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt
	I1025 22:09:50.107925  509076 certs.go:341] copying /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key
	I1025 22:09:50.107977  509076 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.key
	I1025 22:09:50.107988  509076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.crt with IP's: []
	I1025 22:09:50.463085  509076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.crt ...
	I1025 22:09:50.463100  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.crt: {Name:mka21c1a7ea44983993d615e4409496521c3f459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:50.463304  509076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.key ...
	I1025 22:09:50.463311  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.key: {Name:mk4e306d2a8dda2b7582eec35cd3b41c3f129117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:09:50.463508  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453.pem (1338 bytes)
	W1025 22:09:50.463549  509076 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453_empty.pem, impossibly tiny 0 bytes
	I1025 22:09:50.463559  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:09:50.463583  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:09:50.463606  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:09:50.463631  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/certs/home/jenkins/minikube-integration/17488-401064/.minikube/certs/key.pem (1675 bytes)
	I1025 22:09:50.463674  509076 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem (1708 bytes)
	I1025 22:09:50.464303  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 22:09:50.492984  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:09:50.526490  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:09:50.555783  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/scheduled-stop-460131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 22:09:50.584239  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:09:50.613205  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:09:50.641996  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:09:50.671308  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:09:50.700608  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/ssl/certs/4064532.pem --> /usr/share/ca-certificates/4064532.pem (1708 bytes)
	I1025 22:09:50.729683  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:09:50.759027  509076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-401064/.minikube/certs/406453.pem --> /usr/share/ca-certificates/406453.pem (1338 bytes)
	I1025 22:09:50.788552  509076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:09:50.809462  509076 ssh_runner.go:195] Run: openssl version
	I1025 22:09:50.816749  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4064532.pem && ln -fs /usr/share/ca-certificates/4064532.pem /etc/ssl/certs/4064532.pem"
	I1025 22:09:50.828641  509076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4064532.pem
	I1025 22:09:50.833135  509076 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:47 /usr/share/ca-certificates/4064532.pem
	I1025 22:09:50.833191  509076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4064532.pem
	I1025 22:09:50.841870  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4064532.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:09:50.853801  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:09:50.865545  509076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:09:50.870203  509076 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:09:50.870259  509076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:09:50.878832  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:09:50.890740  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/406453.pem && ln -fs /usr/share/ca-certificates/406453.pem /etc/ssl/certs/406453.pem"
	I1025 22:09:50.902571  509076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/406453.pem
	I1025 22:09:50.907355  509076 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:47 /usr/share/ca-certificates/406453.pem
	I1025 22:09:50.907411  509076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/406453.pem
	I1025 22:09:50.916296  509076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/406453.pem /etc/ssl/certs/51391683.0"
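
The openssl/ln pairs above (3ec20f2e.0, b5213941.0, 51391683.0) build OpenSSL's hashed CA directory: each certificate becomes reachable through a symlink named for its subject hash plus a .0 suffix. A sketch that derives the link name rather than hard-coding it (cert path is a placeholder):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
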
	I1025 22:09:50.928406  509076 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 22:09:50.932848  509076 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 22:09:50.932893  509076 kubeadm.go:404] StartCluster: {Name:scheduled-stop-460131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:scheduled-stop-460131 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 22:09:50.932969  509076 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1025 22:09:50.933030  509076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:09:50.976176  509076 cri.go:89] found id: ""
	I1025 22:09:50.976238  509076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:09:50.987373  509076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:09:50.998178  509076 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 22:09:50.998230  509076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:09:51.010752  509076 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:09:51.010790  509076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 22:09:51.068616  509076 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 22:09:51.068872  509076 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 22:09:51.118603  509076 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1025 22:09:51.118674  509076 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1025 22:09:51.118715  509076 kubeadm.go:322] OS: Linux
	I1025 22:09:51.118769  509076 kubeadm.go:322] CGROUPS_CPU: enabled
	I1025 22:09:51.118824  509076 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1025 22:09:51.118868  509076 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1025 22:09:51.118922  509076 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1025 22:09:51.118976  509076 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1025 22:09:51.119023  509076 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1025 22:09:51.119078  509076 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1025 22:09:51.119136  509076 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1025 22:09:51.119195  509076 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1025 22:09:51.203710  509076 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:09:51.203805  509076 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:09:51.203891  509076 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:09:51.462153  509076 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:09:51.465199  509076 out.go:204]   - Generating certificates and keys ...
	I1025 22:09:51.465337  509076 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 22:09:51.465471  509076 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 22:09:51.600472  509076 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 22:09:51.720631  509076 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 22:09:53.287412  509076 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 22:09:54.203515  509076 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 22:09:54.722666  509076 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 22:09:54.723030  509076 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-460131] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 22:09:55.536764  509076 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 22:09:55.537085  509076 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-460131] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 22:09:55.888340  509076 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 22:09:56.332325  509076 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 22:09:56.936300  509076 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 22:09:56.936612  509076 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:09:57.398286  509076 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:09:57.680245  509076 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:09:57.828548  509076 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:09:58.629930  509076 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:09:58.631132  509076 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:09:58.633510  509076 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:09:58.635750  509076 out.go:204]   - Booting up control plane ...
	I1025 22:09:58.635853  509076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:09:58.636241  509076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:09:58.637868  509076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:09:58.653050  509076 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:09:58.653925  509076 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:09:58.654193  509076 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 22:09:58.773788  509076 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:10:06.276215  509076 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505445 seconds
	I1025 22:10:06.276320  509076 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 22:10:06.291167  509076 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 22:10:06.822139  509076 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 22:10:06.822608  509076 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-460131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 22:10:07.333995  509076 kubeadm.go:322] [bootstrap-token] Using token: uduorg.wu9bp3xw5xooup01
	I1025 22:10:07.336040  509076 out.go:204]   - Configuring RBAC rules ...
	I1025 22:10:07.336174  509076 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 22:10:07.342271  509076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 22:10:07.354390  509076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 22:10:07.359411  509076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 22:10:07.363347  509076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 22:10:07.367467  509076 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 22:10:07.383170  509076 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 22:10:07.619472  509076 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 22:10:07.754476  509076 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 22:10:07.755645  509076 kubeadm.go:322] 
	I1025 22:10:07.755706  509076 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 22:10:07.755710  509076 kubeadm.go:322] 
	I1025 22:10:07.755790  509076 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 22:10:07.755794  509076 kubeadm.go:322] 
	I1025 22:10:07.755817  509076 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 22:10:07.755872  509076 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 22:10:07.755919  509076 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 22:10:07.755923  509076 kubeadm.go:322] 
	I1025 22:10:07.755973  509076 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 22:10:07.755977  509076 kubeadm.go:322] 
	I1025 22:10:07.756021  509076 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 22:10:07.756044  509076 kubeadm.go:322] 
	I1025 22:10:07.756092  509076 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 22:10:07.756167  509076 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 22:10:07.756239  509076 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 22:10:07.756243  509076 kubeadm.go:322] 
	I1025 22:10:07.756321  509076 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 22:10:07.756392  509076 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 22:10:07.756396  509076 kubeadm.go:322] 
	I1025 22:10:07.756476  509076 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uduorg.wu9bp3xw5xooup01 \
	I1025 22:10:07.756572  509076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 \
	I1025 22:10:07.757023  509076 kubeadm.go:322] 	--control-plane 
	I1025 22:10:07.757031  509076 kubeadm.go:322] 
	I1025 22:10:07.757127  509076 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 22:10:07.757132  509076 kubeadm.go:322] 
	I1025 22:10:07.757207  509076 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uduorg.wu9bp3xw5xooup01 \
	I1025 22:10:07.757302  509076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8fc893b1bfb9893856fcf0c2057305a384d09e522e58c2d24ef7688104c1c0c8 
	I1025 22:10:07.761708  509076 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1025 22:10:07.761812  509076 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:10:07.761828  509076 cni.go:84] Creating CNI manager for ""
	I1025 22:10:07.761837  509076 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 22:10:07.763650  509076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 22:10:07.765355  509076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 22:10:07.771555  509076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 22:10:07.771566  509076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 22:10:07.814612  509076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 22:10:08.804396  509076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:10:08.804531  509076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:10:08.804608  509076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=scheduled-stop-460131 minikube.k8s.io/updated_at=2023_10_25T22_10_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:10:09.034673  509076 ops.go:34] apiserver oom_adj: -16
	I1025 22:10:09.034689  509076 kubeadm.go:1081] duration metric: took 230.213487ms to wait for elevateKubeSystemPrivileges.
	I1025 22:10:09.034702  509076 kubeadm.go:406] StartCluster complete in 18.101813472s
	I1025 22:10:09.034718  509076 settings.go:142] acquiring lock: {Name:mk9df4aad1a9be3e880e7cbb06d6b12a9835859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:10:09.034784  509076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 22:10:09.035584  509076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-401064/kubeconfig: {Name:mk815098196b1e4c9adc580a5ae817d2d2e4d151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:10:09.037411  509076 config.go:182] Loaded profile config "scheduled-stop-460131": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 22:10:09.037466  509076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 22:10:09.037616  509076 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 22:10:09.037684  509076 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-460131"
	I1025 22:10:09.037698  509076 addons.go:231] Setting addon storage-provisioner=true in "scheduled-stop-460131"
	I1025 22:10:09.037800  509076 host.go:66] Checking if "scheduled-stop-460131" exists ...
	I1025 22:10:09.038287  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:10:09.039082  509076 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-460131"
	I1025 22:10:09.039099  509076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-460131"
	I1025 22:10:09.039401  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:10:09.097539  509076 addons.go:231] Setting addon default-storageclass=true in "scheduled-stop-460131"
	I1025 22:10:09.097570  509076 host.go:66] Checking if "scheduled-stop-460131" exists ...
	I1025 22:10:09.098039  509076 cli_runner.go:164] Run: docker container inspect scheduled-stop-460131 --format={{.State.Status}}
	I1025 22:10:09.100108  509076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:10:09.098872  509076 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-460131" context rescaled to 1 replicas
	I1025 22:10:09.101677  509076 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1025 22:10:09.103482  509076 out.go:177] * Verifying Kubernetes components...
	I1025 22:10:09.101772  509076 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:10:09.111959  509076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:10:09.112029  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:10:09.112030  509076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:10:09.132207  509076 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:10:09.132220  509076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:10:09.132287  509076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-460131
	I1025 22:10:09.161997  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:10:09.179852  509076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/scheduled-stop-460131/id_rsa Username:docker}
	I1025 22:10:09.259116  509076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 22:10:09.260082  509076 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:10:09.260131  509076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:10:09.319091  509076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:10:09.368675  509076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:10:09.758948  509076 api_server.go:72] duration metric: took 657.231943ms to wait for apiserver process to appear ...
	I1025 22:10:09.758963  509076 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:10:09.758978  509076 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 22:10:09.759294  509076 start.go:926] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
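
The sed pipeline at 22:10:09.259 produced that injection by editing the coredns ConfigMap's Corefile in place: a `log` directive is inserted ahead of `errors`, and a hosts stanza is inserted ahead of the `forward . /etc/resolv.conf` line. The inserted stanza (whitespace simplified):

    hosts {
        192.168.67.1 host.minikube.internal
        fallthrough
    }
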
	I1025 22:10:09.774759  509076 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 22:10:09.776242  509076 api_server.go:141] control plane version: v1.28.3
	I1025 22:10:09.776257  509076 api_server.go:131] duration metric: took 17.28957ms to wait for apiserver health ...
	I1025 22:10:09.776273  509076 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:10:09.786880  509076 system_pods.go:59] 4 kube-system pods found
	I1025 22:10:09.786908  509076 system_pods.go:61] "etcd-scheduled-stop-460131" [309b8c12-778e-4477-958d-bdecd9c6e4c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:10:09.786918  509076 system_pods.go:61] "kube-apiserver-scheduled-stop-460131" [2e6416b7-6691-41a7-97c9-546e5ca3a2e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:10:09.786930  509076 system_pods.go:61] "kube-controller-manager-scheduled-stop-460131" [3141430d-09a6-43d3-9520-e3d590669ddd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:10:09.786939  509076 system_pods.go:61] "kube-scheduler-scheduled-stop-460131" [79794e4a-4caa-45cf-8c3d-5dd07c61f65d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:10:09.786947  509076 system_pods.go:74] duration metric: took 10.66682ms to wait for pod list to return data ...
	I1025 22:10:09.786957  509076 kubeadm.go:581] duration metric: took 685.25024ms to wait for : map[apiserver:true system_pods:true] ...
	I1025 22:10:09.786972  509076 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:10:09.791062  509076 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 22:10:09.791080  509076 node_conditions.go:123] node cpu capacity is 2
	I1025 22:10:09.791090  509076 node_conditions.go:105] duration metric: took 4.113354ms to run NodePressure ...
	I1025 22:10:09.791100  509076 start.go:228] waiting for startup goroutines ...
	I1025 22:10:10.044006  509076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 22:10:10.046178  509076 addons.go:502] enable addons completed in 1.008545872s: enabled=[default-storageclass storage-provisioner]
	I1025 22:10:10.046225  509076 start.go:233] waiting for cluster config update ...
	I1025 22:10:10.046238  509076 start.go:242] writing updated cluster config ...
	I1025 22:10:10.046596  509076 ssh_runner.go:195] Run: rm -f paused
	I1025 22:10:10.116199  509076 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1025 22:10:10.122448  509076 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-460131" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eea8133a73d72       9cdd6470f48c8       11 seconds ago      Running             etcd                      0                   14e0e808ec3d0       etcd-scheduled-stop-460131
	f998549054490       42a4e73724daa       11 seconds ago      Running             kube-scheduler            0                   22acd2098d6a6       kube-scheduler-scheduled-stop-460131
	4ccd45378bd2b       537e9a59ee2fd       11 seconds ago      Running             kube-apiserver            0                   35711a5f12482       kube-apiserver-scheduled-stop-460131
	b4788292fbd1d       8276439b4f237       11 seconds ago      Running             kube-controller-manager   0                   72e8f2c102021       kube-controller-manager-scheduled-stop-460131
	
	* 
	* ==> containerd <==
	* Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.340490374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14e0e808ec3d072763bbead00046fa3ac5bb549711af3d455a8cfc3d453b2ec0 pid=1096 runtime=io.containerd.runc.v2
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.344178153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.344584051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.344735722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.345752768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22acd2098d6a6c7f794cce6610a497c3ade1ced34ca06d1d40768f4818435cd2 pid=1090 runtime=io.containerd.runc.v2
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.454124252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-460131,Uid:423a79cdac43ec27a46d60dc29afb685,Namespace:kube-system,Attempt:0,} returns sandbox id \"72e8f2c102021e84a23218e0694f9a1fdf8239623a64b422216d8a397ca71fda\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.457887519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-460131,Uid:09588a80e5f1d0a1749abe0f115c5aae,Namespace:kube-system,Attempt:0,} returns sandbox id \"35711a5f12482a2c4f0670a08bb7fdb41f90d8f9e28a8e7e5e9c311bd21c370a\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.460391304Z" level=info msg="CreateContainer within sandbox \"72e8f2c102021e84a23218e0694f9a1fdf8239623a64b422216d8a397ca71fda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.479135563Z" level=info msg="CreateContainer within sandbox \"35711a5f12482a2c4f0670a08bb7fdb41f90d8f9e28a8e7e5e9c311bd21c370a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.507554474Z" level=info msg="CreateContainer within sandbox \"72e8f2c102021e84a23218e0694f9a1fdf8239623a64b422216d8a397ca71fda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4788292fbd1d3b9747785cd9e4bc32cdff948eac5baa46acde52dea5a0e7712\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.509229667Z" level=info msg="StartContainer for \"b4788292fbd1d3b9747785cd9e4bc32cdff948eac5baa46acde52dea5a0e7712\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.518568393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-460131,Uid:d94ced0581b1c24cc2f33376188f66ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"22acd2098d6a6c7f794cce6610a497c3ade1ced34ca06d1d40768f4818435cd2\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.520219185Z" level=info msg="CreateContainer within sandbox \"35711a5f12482a2c4f0670a08bb7fdb41f90d8f9e28a8e7e5e9c311bd21c370a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4ccd45378bd2b5613b23136028aa3558b2dd665975a180b006f5b46956a07600\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.521422380Z" level=info msg="StartContainer for \"4ccd45378bd2b5613b23136028aa3558b2dd665975a180b006f5b46956a07600\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.522674001Z" level=info msg="CreateContainer within sandbox \"22acd2098d6a6c7f794cce6610a497c3ade1ced34ca06d1d40768f4818435cd2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.534657788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-460131,Uid:1f93dc3034bc36d273d5dcba1e8b642e,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e0e808ec3d072763bbead00046fa3ac5bb549711af3d455a8cfc3d453b2ec0\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.540177076Z" level=info msg="CreateContainer within sandbox \"14e0e808ec3d072763bbead00046fa3ac5bb549711af3d455a8cfc3d453b2ec0\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.573954003Z" level=info msg="CreateContainer within sandbox \"14e0e808ec3d072763bbead00046fa3ac5bb549711af3d455a8cfc3d453b2ec0\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"eea8133a73d72e4e5a1ad56ef3d93071d2869da9a130bd1e300a61d8c92f1c98\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.576597529Z" level=info msg="CreateContainer within sandbox \"22acd2098d6a6c7f794cce6610a497c3ade1ced34ca06d1d40768f4818435cd2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9985490544900a0d54d116c21a7100833e0a2ebb40c4401622fa0dca61e3782\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.577766509Z" level=info msg="StartContainer for \"f9985490544900a0d54d116c21a7100833e0a2ebb40c4401622fa0dca61e3782\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.579944838Z" level=info msg="StartContainer for \"eea8133a73d72e4e5a1ad56ef3d93071d2869da9a130bd1e300a61d8c92f1c98\""
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.657325139Z" level=info msg="StartContainer for \"4ccd45378bd2b5613b23136028aa3558b2dd665975a180b006f5b46956a07600\" returns successfully"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.668999430Z" level=info msg="StartContainer for \"b4788292fbd1d3b9747785cd9e4bc32cdff948eac5baa46acde52dea5a0e7712\" returns successfully"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.929193936Z" level=info msg="StartContainer for \"f9985490544900a0d54d116c21a7100833e0a2ebb40c4401622fa0dca61e3782\" returns successfully"
	Oct 25 22:10:00 scheduled-stop-460131 containerd[750]: time="2023-10-25T22:10:00.939101382Z" level=info msg="StartContainer for \"eea8133a73d72e4e5a1ad56ef3d93071d2869da9a130bd1e300a61d8c92f1c98\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               scheduled-stop-460131
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-460131
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=scheduled-stop-460131
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T22_10_08_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 22:10:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-460131
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 22:10:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 22:10:08 +0000   Wed, 25 Oct 2023 22:10:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 22:10:08 +0000   Wed, 25 Oct 2023 22:10:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 22:10:08 +0000   Wed, 25 Oct 2023 22:10:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 22:10:08 +0000   Wed, 25 Oct 2023 22:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    scheduled-stop-460131
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b02155a12924a03a7f6525dd961167b
	  System UUID:                e77c839e-a2d2-48fb-af85-590a5f9f9405
	  Boot ID:                    dc9d99ba-cdb2-4b53-84d7-7ab685ba34f1
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-460131                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-460131             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-460131    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-460131             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet  Node scheduled-stop-460131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet  Node scheduled-stop-460131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x7 over 13s)  kubelet  Node scheduled-stop-460131 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 5s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  5s                 kubelet  Node scheduled-stop-460131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5s                 kubelet  Node scheduled-stop-460131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5s                 kubelet  Node scheduled-stop-460131 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5s                 kubelet  Node scheduled-stop-460131 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                 kubelet  Node scheduled-stop-460131 status is now: NodeReady
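The node description above can be regenerated on demand against the same profile while it is still running (a sketch; the kubectl context name matches the minikube profile name):

	kubectl --context scheduled-stop-460131 describe node scheduled-stop-460131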
	
	* 
	* ==> dmesg <==
	* [  +0.000733] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=00000000d337f5d3
	[  +0.001074] FS-Cache: N-key=[8] 'e53a5c0100000000'
	[  +2.898689] FS-Cache: Duplicate cookie detected
	[  +0.000721] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001017] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=00000000dd16fe58
	[  +0.001116] FS-Cache: O-key=[8] 'e43a5c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=000000007e24298e
	[  +0.001122] FS-Cache: N-key=[8] 'e43a5c0100000000'
	[  +0.398811] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=00000000660f3c89{9p.inode} n=000000003c868379
	[  +0.001081] FS-Cache: O-key=[8] 'ea3a5c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=00000000660f3c89{9p.inode} n=00000000f33d2ab3
	[  +0.001092] FS-Cache: N-key=[8] 'ea3a5c0100000000'
	[  +3.995332] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001051] FS-Cache: O-cookie d=000000002ab2478e{9P.session} n=00000000573aea2f
	[  +0.001110] FS-Cache: O-key=[10] '34323936323930373639'
	[  +0.000782] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=000000002ab2478e{9P.session} n=00000000bfeaf0de
	[  +0.001070] FS-Cache: N-key=[10] '34323936323930373639'
	[Oct25 21:51] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [eea8133a73d72e4e5a1ad56ef3d93071d2869da9a130bd1e300a61d8c92f1c98] <==
	* {"level":"info","ts":"2023-10-25T22:10:01.109534Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-25T22:10:01.109564Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-25T22:10:01.092529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-10-25T22:10:01.109775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-10-25T22:10:01.092627Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-25T22:10:01.109867Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-25T22:10:01.091698Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8688e899f7831fc7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-10-25T22:10:01.545537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-25T22:10:01.545705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-25T22:10:01.545836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-10-25T22:10:01.54593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-10-25T22:10:01.546023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-25T22:10:01.546114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-10-25T22:10:01.546209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-25T22:10:01.549252Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T22:10:01.557472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:scheduled-stop-460131 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T22:10:01.557677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T22:10:01.559127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-25T22:10:01.565096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T22:10:01.57433Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-25T22:10:01.569155Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T22:10:01.569506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T22:10:01.574755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T22:10:01.575041Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T22:10:01.584206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:10:12 up  1:52,  0 users,  load average: 1.59, 1.56, 1.97
	Linux scheduled-stop-460131 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [4ccd45378bd2b5613b23136028aa3558b2dd665975a180b006f5b46956a07600] <==
	* I1025 22:10:04.518755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 22:10:04.518763       1 cache.go:39] Caches are synced for autoregister controller
	I1025 22:10:04.547333       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1025 22:10:04.555163       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1025 22:10:04.555586       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 22:10:04.563317       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 22:10:04.563563       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 22:10:04.568298       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 22:10:04.572876       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 22:10:04.573108       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1025 22:10:04.595577       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 22:10:04.799074       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 22:10:05.161683       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 22:10:05.166056       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 22:10:05.166087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 22:10:05.743102       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 22:10:05.788364       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 22:10:05.912633       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 22:10:05.919647       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1025 22:10:05.920963       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 22:10:05.925516       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 22:10:06.463134       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 22:10:07.600747       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 22:10:07.617094       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 22:10:07.631719       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [b4788292fbd1d3b9747785cd9e4bc32cdff948eac5baa46acde52dea5a0e7712] <==
	* I1025 22:10:08.065792       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I1025 22:10:08.065904       1 disruption.go:437] "Sending events to api server."
	I1025 22:10:08.065945       1 disruption.go:448] "Starting disruption controller"
	I1025 22:10:08.065955       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I1025 22:10:08.208186       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I1025 22:10:08.208301       1 ttl_controller.go:124] "Starting TTL controller"
	I1025 22:10:08.208386       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I1025 22:10:08.358461       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1025 22:10:08.358845       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1025 22:10:08.359040       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I1025 22:10:08.508188       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I1025 22:10:08.508392       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I1025 22:10:08.508501       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I1025 22:10:08.660981       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I1025 22:10:08.661159       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I1025 22:10:08.661170       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I1025 22:10:08.811522       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I1025 22:10:08.811599       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I1025 22:10:08.957340       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I1025 22:10:08.957435       1 gc_controller.go:103] "Starting GC controller"
	I1025 22:10:08.957445       1 shared_informer.go:311] Waiting for caches to sync for GC
	I1025 22:10:09.122031       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I1025 22:10:09.122104       1 tokencleaner.go:112] "Starting token cleaner controller"
	I1025 22:10:09.122115       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I1025 22:10:09.122122       1 shared_informer.go:318] Caches are synced for token_cleaner
	
	* 
	* ==> kube-scheduler [f9985490544900a0d54d116c21a7100833e0a2ebb40c4401622fa0dca61e3782] <==
	* W1025 22:10:04.593845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 22:10:04.593975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 22:10:04.597596       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 22:10:04.597777       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 22:10:04.601431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 22:10:04.601669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 22:10:04.601852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 22:10:04.601934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 22:10:04.602108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 22:10:04.602196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 22:10:04.602375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 22:10:04.602460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 22:10:04.602615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 22:10:04.602698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 22:10:04.602863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 22:10:04.602993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 22:10:05.448491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 22:10:05.448890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 22:10:05.453358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 22:10:05.453395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 22:10:05.484739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 22:10:05.484964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 22:10:05.772765       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 22:10:05.772994       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 22:10:07.679731       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.091068    1377 topology_manager.go:215] "Topology Admit Handler" podUID="09588a80e5f1d0a1749abe0f115c5aae" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.091384    1377 topology_manager.go:215] "Topology Admit Handler" podUID="423a79cdac43ec27a46d60dc29afb685" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.091535    1377 topology_manager.go:215] "Topology Admit Handler" podUID="d94ced0581b1c24cc2f33376188f66ea" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.091657    1377 topology_manager.go:215] "Topology Admit Handler" podUID="1f93dc3034bc36d273d5dcba1e8b642e" podNamespace="kube-system" podName="etcd-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167030    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09588a80e5f1d0a1749abe0f115c5aae-k8s-certs\") pod \"kube-apiserver-scheduled-stop-460131\" (UID: \"09588a80e5f1d0a1749abe0f115c5aae\") " pod="kube-system/kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167102    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09588a80e5f1d0a1749abe0f115c5aae-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-460131\" (UID: \"09588a80e5f1d0a1749abe0f115c5aae\") " pod="kube-system/kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167139    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09588a80e5f1d0a1749abe0f115c5aae-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-460131\" (UID: \"09588a80e5f1d0a1749abe0f115c5aae\") " pod="kube-system/kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167166    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-ca-certs\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167198    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167225    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1f93dc3034bc36d273d5dcba1e8b642e-etcd-certs\") pod \"etcd-scheduled-stop-460131\" (UID: \"1f93dc3034bc36d273d5dcba1e8b642e\") " pod="kube-system/etcd-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167251    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09588a80e5f1d0a1749abe0f115c5aae-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-460131\" (UID: \"09588a80e5f1d0a1749abe0f115c5aae\") " pod="kube-system/kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167273    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d94ced0581b1c24cc2f33376188f66ea-kubeconfig\") pod \"kube-scheduler-scheduled-stop-460131\" (UID: \"d94ced0581b1c24cc2f33376188f66ea\") " pod="kube-system/kube-scheduler-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167300    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167330    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1f93dc3034bc36d273d5dcba1e8b642e-etcd-data\") pod \"etcd-scheduled-stop-460131\" (UID: \"1f93dc3034bc36d273d5dcba1e8b642e\") " pod="kube-system/etcd-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167357    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09588a80e5f1d0a1749abe0f115c5aae-ca-certs\") pod \"kube-apiserver-scheduled-stop-460131\" (UID: \"09588a80e5f1d0a1749abe0f115c5aae\") " pod="kube-system/kube-apiserver-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167379    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167404    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167435    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.167463    1377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/423a79cdac43ec27a46d60dc29afb685-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-460131\" (UID: \"423a79cdac43ec27a46d60dc29afb685\") " pod="kube-system/kube-controller-manager-scheduled-stop-460131"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.696928    1377 apiserver.go:52] "Watching apiserver"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.763719    1377 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.955701    1377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-460131" podStartSLOduration=0.955582494 podCreationTimestamp="2023-10-25 22:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 22:10:08.940926669 +0000 UTC m=+1.368046949" watchObservedRunningTime="2023-10-25 22:10:08.955582494 +0000 UTC m=+1.382702774"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.970625    1377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-460131" podStartSLOduration=0.97056311 podCreationTimestamp="2023-10-25 22:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 22:10:08.957540648 +0000 UTC m=+1.384660928" watchObservedRunningTime="2023-10-25 22:10:08.97056311 +0000 UTC m=+1.397683439"
	Oct 25 22:10:08 scheduled-stop-460131 kubelet[1377]: I1025 22:10:08.988933    1377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-460131" podStartSLOduration=0.988874952 podCreationTimestamp="2023-10-25 22:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 22:10:08.970864786 +0000 UTC m=+1.397985058" watchObservedRunningTime="2023-10-25 22:10:08.988874952 +0000 UTC m=+1.415995232"
	Oct 25 22:10:09 scheduled-stop-460131 kubelet[1377]: I1025 22:10:09.006477    1377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-460131" podStartSLOduration=1.006409124 podCreationTimestamp="2023-10-25 22:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 22:10:08.989829878 +0000 UTC m=+1.416950158" watchObservedRunningTime="2023-10-25 22:10:09.006409124 +0000 UTC m=+1.433529413"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-460131 -n scheduled-stop-460131
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-460131 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-460131 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-460131 describe pod storage-provisioner: exit status 1 (94.29083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-460131 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-460131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-460131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-460131: (2.049560238s)
--- FAIL: TestScheduledStopUnix (38.69s)
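For a local re-run of the flow this test exercises, a minimal sketch (the profile name scheduled-stop-repro is arbitrary; --schedule is minikube's flag for a deferred stop):

	# start a throwaway profile, schedule a stop, then confirm it happened
	minikube start -p scheduled-stop-repro --driver=docker --container-runtime=containerd
	minikube stop -p scheduled-stop-repro --schedule 15s
	sleep 20
	minikube status -p scheduled-stop-repro   # host should now report Stopped
	minikube delete -p scheduled-stop-repro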

                                                
                                    

Test pass (272/308)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.07
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.44
10 TestDownloadOnly/v1.28.3/json-events 10.61
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.11
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.62
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
25 TestAddons/Setup 142.24
27 TestAddons/parallel/Registry 15.1
29 TestAddons/parallel/InspektorGadget 11.13
30 TestAddons/parallel/MetricsServer 5.89
33 TestAddons/parallel/CSI 45.17
34 TestAddons/parallel/Headlamp 11.37
35 TestAddons/parallel/CloudSpanner 5.66
36 TestAddons/parallel/LocalPath 52.98
37 TestAddons/parallel/NvidiaDevicePlugin 5.67
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/StoppedEnableDisable 12.44
42 TestCertOptions 41.66
43 TestCertExpiration 237.18
45 TestForceSystemdFlag 39.08
46 TestForceSystemdEnv 41.1
47 TestDockerEnvContainerd 49.93
52 TestErrorSpam/setup 30.09
53 TestErrorSpam/start 0.9
54 TestErrorSpam/status 1.2
55 TestErrorSpam/pause 1.86
56 TestErrorSpam/unpause 1.97
57 TestErrorSpam/stop 1.52
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 81.05
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 7.01
64 TestFunctional/serial/KubeContext 0.07
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.92
69 TestFunctional/serial/CacheCmd/cache/add_local 1.62
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.33
74 TestFunctional/serial/CacheCmd/cache/delete 0.15
75 TestFunctional/serial/MinikubeKubectlCmd 0.17
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
77 TestFunctional/serial/ExtraConfig 44.84
78 TestFunctional/serial/ComponentHealth 0.12
79 TestFunctional/serial/LogsCmd 1.79
80 TestFunctional/serial/LogsFileCmd 2.18
81 TestFunctional/serial/InvalidService 5.01
83 TestFunctional/parallel/ConfigCmd 0.62
84 TestFunctional/parallel/DashboardCmd 14.23
85 TestFunctional/parallel/DryRun 0.66
86 TestFunctional/parallel/InternationalLanguage 0.33
87 TestFunctional/parallel/StatusCmd 1.44
91 TestFunctional/parallel/ServiceCmdConnect 7.76
92 TestFunctional/parallel/AddonsCmd 0.23
93 TestFunctional/parallel/PersistentVolumeClaim 25.5
95 TestFunctional/parallel/SSHCmd 0.85
96 TestFunctional/parallel/CpCmd 1.6
98 TestFunctional/parallel/FileSync 0.4
99 TestFunctional/parallel/CertSync 2.4
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
107 TestFunctional/parallel/License 0.41
108 TestFunctional/parallel/Version/short 0.07
109 TestFunctional/parallel/Version/components 1.48
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
115 TestFunctional/parallel/ImageCommands/Setup 2.47
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.44
123 TestFunctional/parallel/ServiceCmd/List 0.47
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
126 TestFunctional/parallel/ServiceCmd/Format 0.51
128 TestFunctional/parallel/ServiceCmd/URL 0.55
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.82
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
144 TestFunctional/parallel/ProfileCmd/profile_list 0.55
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
146 TestFunctional/parallel/MountCmd/any-port 7.39
147 TestFunctional/parallel/MountCmd/specific-port 2.5
148 TestFunctional/parallel/MountCmd/VerifyCleanup 2.73
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 92.57
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.52
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
162 TestJSONOutput/start/Command 87
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.84
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.78
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.8
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.26
187 TestKicCustomNetwork/create_custom_network 43.08
188 TestKicCustomNetwork/use_default_bridge_network 33.47
189 TestKicExistingNetwork 35.8
190 TestKicCustomSubnet 35.77
191 TestKicStaticIP 34.8
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 68.34
196 TestMountStart/serial/StartWithMountFirst 6.43
197 TestMountStart/serial/VerifyMountFirst 0.3
198 TestMountStart/serial/StartWithMountSecond 6.56
199 TestMountStart/serial/VerifyMountSecond 0.3
200 TestMountStart/serial/DeleteFirst 1.7
201 TestMountStart/serial/VerifyMountPostDelete 0.3
202 TestMountStart/serial/Stop 1.23
203 TestMountStart/serial/RestartStopped 7.66
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 103.51
208 TestMultiNode/serial/DeployApp2Nodes 4.97
209 TestMultiNode/serial/PingHostFrom2Pods 1.21
210 TestMultiNode/serial/AddNode 17.58
211 TestMultiNode/serial/ProfileList 0.38
212 TestMultiNode/serial/CopyFile 11.68
213 TestMultiNode/serial/StopNode 2.43
214 TestMultiNode/serial/StartAfterStop 12.39
215 TestMultiNode/serial/RestartKeepsNodes 121.21
216 TestMultiNode/serial/DeleteNode 5.39
217 TestMultiNode/serial/StopMultiNode 24.16
218 TestMultiNode/serial/RestartMultiNode 87.81
219 TestMultiNode/serial/ValidateNameConflict 36.54
224 TestPreload 140.79
229 TestInsufficientStorage 10.45
230 TestRunningBinaryUpgrade 86.04
232 TestKubernetesUpgrade 377.92
233 TestMissingContainerUpgrade 173.7
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
236 TestNoKubernetes/serial/StartWithK8s 39.45
237 TestNoKubernetes/serial/StartWithStopK8s 18.95
238 TestNoKubernetes/serial/Start 6.12
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
240 TestNoKubernetes/serial/ProfileList 1.17
241 TestNoKubernetes/serial/Stop 1.33
242 TestNoKubernetes/serial/StartNoArgs 7.54
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.54
244 TestStoppedBinaryUpgrade/Setup 1.34
245 TestStoppedBinaryUpgrade/Upgrade 111.75
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
255 TestPause/serial/Start 60.71
256 TestPause/serial/SecondStartNoReconfiguration 6.9
257 TestPause/serial/Pause 1.34
258 TestPause/serial/VerifyStatus 0.59
259 TestPause/serial/Unpause 1.24
260 TestPause/serial/PauseAgain 1.26
261 TestPause/serial/DeletePaused 3.11
262 TestPause/serial/VerifyDeletedResources 0.73
270 TestNetworkPlugins/group/false 5.23
275 TestStartStop/group/old-k8s-version/serial/FirstStart 133.95
276 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
278 TestStartStop/group/old-k8s-version/serial/Stop 12.26
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
280 TestStartStop/group/old-k8s-version/serial/SecondStart 658.81
282 TestStartStop/group/no-preload/serial/FirstStart 71.57
283 TestStartStop/group/no-preload/serial/DeployApp 8.53
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
285 TestStartStop/group/no-preload/serial/Stop 12.12
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
287 TestStartStop/group/no-preload/serial/SecondStart 339.02
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
291 TestStartStop/group/no-preload/serial/Pause 3.47
293 TestStartStop/group/embed-certs/serial/FirstStart 87.15
294 TestStartStop/group/embed-certs/serial/DeployApp 9.42
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
296 TestStartStop/group/embed-certs/serial/Stop 12.16
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
298 TestStartStop/group/embed-certs/serial/SecondStart 341.58
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
302 TestStartStop/group/old-k8s-version/serial/Pause 3.52
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.92
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.39
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.12
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
313 TestStartStop/group/embed-certs/serial/Pause 3.58
315 TestStartStop/group/newest-cni/serial/FirstStart 42.05
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
318 TestStartStop/group/newest-cni/serial/Stop 1.3
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
320 TestStartStop/group/newest-cni/serial/SecondStart 30.73
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
324 TestStartStop/group/newest-cni/serial/Pause 3.44
325 TestNetworkPlugins/group/auto/Start 85.76
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.03
327 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
328 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.39
329 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.72
330 TestNetworkPlugins/group/auto/KubeletFlags 0.46
331 TestNetworkPlugins/group/auto/NetCatPod 10.49
332 TestNetworkPlugins/group/kindnet/Start 86.13
333 TestNetworkPlugins/group/auto/DNS 0.28
334 TestNetworkPlugins/group/auto/Localhost 0.23
335 TestNetworkPlugins/group/auto/HairPin 0.26
336 TestNetworkPlugins/group/calico/Start 66.32
337 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
339 TestNetworkPlugins/group/kindnet/NetCatPod 11.53
340 TestNetworkPlugins/group/kindnet/DNS 0.23
341 TestNetworkPlugins/group/kindnet/Localhost 0.22
342 TestNetworkPlugins/group/kindnet/HairPin 0.23
343 TestNetworkPlugins/group/calico/ControllerPod 5.05
344 TestNetworkPlugins/group/calico/KubeletFlags 0.35
345 TestNetworkPlugins/group/calico/NetCatPod 10.51
346 TestNetworkPlugins/group/calico/DNS 0.29
347 TestNetworkPlugins/group/calico/Localhost 0.22
348 TestNetworkPlugins/group/calico/HairPin 0.28
349 TestNetworkPlugins/group/custom-flannel/Start 63.78
350 TestNetworkPlugins/group/enable-default-cni/Start 91.04
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.39
353 TestNetworkPlugins/group/custom-flannel/DNS 0.22
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
356 TestNetworkPlugins/group/flannel/Start 59.6
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.48
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.33
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.26
362 TestNetworkPlugins/group/bridge/Start 86.78
363 TestNetworkPlugins/group/flannel/ControllerPod 5.06
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
365 TestNetworkPlugins/group/flannel/NetCatPod 10.5
366 TestNetworkPlugins/group/flannel/DNS 0.24
367 TestNetworkPlugins/group/flannel/Localhost 0.27
368 TestNetworkPlugins/group/flannel/HairPin 0.31
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
370 TestNetworkPlugins/group/bridge/NetCatPod 8.32
371 TestNetworkPlugins/group/bridge/DNS 0.2
372 TestNetworkPlugins/group/bridge/Localhost 0.17
373 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (12.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-836857 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-836857 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.073614811s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.07s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
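The preload-exists assertion reduces to checking that the preloaded image tarball landed in the local cache. Roughly (cache root as logged further below; the exact tarball name varies with preload version, Kubernetes version, runtime, and architecture):

	ls ~/.minikube/cache/preloaded-tarball/
	# e.g. preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4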

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-836857
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-836857: exit status 85 (439.401879ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-836857 | jenkins | v1.31.2 | 25 Oct 23 21:40 UTC |          |
	|         | -p download-only-836857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:40:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:40:46.906108  406458 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:40:46.906289  406458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:40:46.906300  406458 out.go:309] Setting ErrFile to fd 2...
	I1025 21:40:46.906306  406458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:40:46.906566  406458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	W1025 21:40:46.906697  406458 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-401064/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-401064/.minikube/config/config.json: no such file or directory
	I1025 21:40:46.907097  406458 out.go:303] Setting JSON to true
	I1025 21:40:46.907994  406458 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4984,"bootTime":1698265063,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:40:46.908067  406458 start.go:138] virtualization:  
	I1025 21:40:46.910936  406458 out.go:97] [download-only-836857] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	W1025 21:40:46.911179  406458 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 21:40:46.912979  406458 out.go:169] MINIKUBE_LOCATION=17488
	I1025 21:40:46.911311  406458 notify.go:220] Checking for updates...
	I1025 21:40:46.916752  406458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:40:46.918711  406458 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:40:46.920522  406458 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:40:46.922367  406458 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 21:40:46.925650  406458 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:40:46.925911  406458 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:40:46.950006  406458 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:40:46.950107  406458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:40:47.037975  406458 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-25 21:40:47.027859477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:40:47.038088  406458 docker.go:295] overlay module found
	I1025 21:40:47.040080  406458 out.go:97] Using the docker driver based on user configuration
	I1025 21:40:47.040107  406458 start.go:298] selected driver: docker
	I1025 21:40:47.040113  406458 start.go:902] validating driver "docker" against <nil>
	I1025 21:40:47.040228  406458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:40:47.104938  406458 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-25 21:40:47.095646484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:40:47.105199  406458 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:40:47.105488  406458 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1025 21:40:47.105646  406458 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:40:47.107627  406458 out.go:169] Using Docker driver with root privileges
	I1025 21:40:47.109602  406458 cni.go:84] Creating CNI manager for ""
	I1025 21:40:47.109630  406458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:40:47.109658  406458 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 21:40:47.109677  406458 start_flags.go:323] config:
	{Name:download-only-836857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-836857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:40:47.111562  406458 out.go:97] Starting control plane node download-only-836857 in cluster download-only-836857
	I1025 21:40:47.111581  406458 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1025 21:40:47.113290  406458 out.go:97] Pulling base image ...
	I1025 21:40:47.113319  406458 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1025 21:40:47.113425  406458 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:40:47.130587  406458 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:40:47.130792  406458 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:40:47.130889  406458 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:40:47.176997  406458 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1025 21:40:47.177025  406458 cache.go:56] Caching tarball of preloaded images
	I1025 21:40:47.178389  406458 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1025 21:40:47.180303  406458 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 21:40:47.180320  406458 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:40:47.301813  406458 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1025 21:40:51.897740  406458 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-836857"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.44s)
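
The "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header at the top of the dump above describes glog/klog-style records. As a minimal sketch for anyone post-processing these logs (illustrative tooling only, not part of the test suite; the regular expression and field names are assumptions), the format can be split with Go's regexp package:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches glog/klog-style records such as:
	//   I1025 21:40:46.906108  406458 out.go:296] Setting OutFile to fd 1 ...
	// Capture groups: severity (I/W/E/F), mmdd date, time, thread id,
	// file:line location, and the message text.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "W1025 21:40:46.906697  406458 root.go:314] Error reading config file"
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}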

TestDownloadOnly/v1.28.3/json-events (10.61s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-836857 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-836857 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.609419183s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (10.61s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-836857
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-836857: exit status 85 (106.419429ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-836857 | jenkins | v1.31.2 | 25 Oct 23 21:40 UTC |          |
	|         | -p download-only-836857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-836857 | jenkins | v1.31.2 | 25 Oct 23 21:40 UTC |          |
	|         | -p download-only-836857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:40:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:40:59.420197  406531 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:40:59.420365  406531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:40:59.420373  406531 out.go:309] Setting ErrFile to fd 2...
	I1025 21:40:59.420379  406531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:40:59.420640  406531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	W1025 21:40:59.420769  406531 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-401064/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-401064/.minikube/config/config.json: no such file or directory
	I1025 21:40:59.420992  406531 out.go:303] Setting JSON to true
	I1025 21:40:59.421998  406531 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4997,"bootTime":1698265063,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:40:59.422071  406531 start.go:138] virtualization:  
	I1025 21:40:59.433648  406531 out.go:97] [download-only-836857] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 21:40:59.434016  406531 notify.go:220] Checking for updates...
	I1025 21:40:59.466046  406531 out.go:169] MINIKUBE_LOCATION=17488
	I1025 21:40:59.500467  406531 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:40:59.530533  406531 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:40:59.562334  406531 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:40:59.593711  406531 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 21:40:59.659107  406531 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:40:59.659768  406531 config.go:182] Loaded profile config "download-only-836857": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1025 21:40:59.659820  406531 start.go:810] api.Load failed for download-only-836857: filestore "download-only-836857": Docker machine "download-only-836857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:40:59.659925  406531 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 21:40:59.659964  406531 start.go:810] api.Load failed for download-only-836857: filestore "download-only-836857": Docker machine "download-only-836857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:40:59.683475  406531 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:40:59.683580  406531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:40:59.750146  406531 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-25 21:40:59.740357995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:40:59.750248  406531 docker.go:295] overlay module found
	I1025 21:40:59.786979  406531 out.go:97] Using the docker driver based on existing profile
	I1025 21:40:59.787014  406531 start.go:298] selected driver: docker
	I1025 21:40:59.787021  406531 start.go:902] validating driver "docker" against &{Name:download-only-836857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-836857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:40:59.787218  406531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:40:59.861308  406531 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-25 21:40:59.850984735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:40:59.861705  406531 cni.go:84] Creating CNI manager for ""
	I1025 21:40:59.861722  406531 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1025 21:40:59.861736  406531 start_flags.go:323] config:
	{Name:download-only-836857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-836857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:40:59.882096  406531 out.go:97] Starting control plane node download-only-836857 in cluster download-only-836857
	I1025 21:40:59.882125  406531 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1025 21:40:59.925546  406531 out.go:97] Pulling base image ...
	I1025 21:40:59.925587  406531 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:40:59.925663  406531 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 21:40:59.945149  406531 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 21:40:59.945296  406531 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 21:40:59.945317  406531 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 21:40:59.945323  406531 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 21:40:59.945335  406531 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 21:40:59.984793  406531 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1025 21:40:59.984822  406531 cache.go:56] Caching tarball of preloaded images
	I1025 21:40:59.984977  406531 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:41:00.027028  406531 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 21:41:00.027063  406531 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:41:00.148317  406531 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:bef3312f8cc1e9e2e6a78bd8b3d269c4 -> /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1025 21:41:08.288895  406531 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:41:08.289000  406531 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-401064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1025 21:41:09.204354  406531 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1025 21:41:09.204503  406531 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/download-only-836857/config.json ...
	I1025 21:41:09.204716  406531 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1025 21:41:09.204907  406531 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17488-401064/.minikube/cache/linux/arm64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-836857"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.11s)
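
The download URLs above carry their expected digest as a ?checksum=md5:... query parameter, and the log then records separate "getting checksum" and "verifying checksum" steps once the tarball lands on disk. A minimal sketch of that verification step (not minikube's actual implementation; the file name and digest are taken from the URL above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 recomputes the MD5 digest of path and compares it with want.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		err := verifyMD5("preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4",
			"bef3312f8cc1e9e2e6a78bd8b3d269c4")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("checksum OK")
	}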

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-836857
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-927332 --alsologtostderr --binary-mirror http://127.0.0.1:33477 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-927332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-927332
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-624750
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-624750: exit status 85 (88.286594ms)

-- stdout --
	* Profile "addons-624750" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624750"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-624750
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-624750: exit status 85 (93.917478ms)

-- stdout --
	* Profile "addons-624750" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624750"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (142.24s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-624750 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-624750 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.240498871s)
--- PASS: TestAddons/Setup (142.24s)

TestAddons/parallel/Registry (15.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 51.491287ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-22k55" [774a42c5-49ca-495e-9cf7-14e3567306e2] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.033908558s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-th8cc" [d49467c4-7722-4c28-8665-5fcdef655317] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014263963s
addons_test.go:339: (dbg) Run:  kubectl --context addons-624750 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-624750 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-624750 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.63669939s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 ip
2023/10/25 21:43:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable registry --alsologtostderr -v=1: (1.078744984s)
--- PASS: TestAddons/parallel/Registry (15.10s)
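
The registry check above probes the in-cluster service with wget --spider -S, i.e. it fetches response headers without downloading a body. A rough Go equivalent using a HEAD request (illustrative only; the service DNS name resolves only from inside the cluster, which is why the test runs the probe in a busybox pod):

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Issue a HEAD request and print the status and headers,
		// roughly what `wget --spider -S <url>` reports.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		fmt.Println(resp.Status)
		for k, v := range resp.Header {
			fmt.Println(" ", k, v)
		}
	}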

TestAddons/parallel/InspektorGadget (11.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8tbm5" [beac6ca5-549f-45fe-9df2-7baa7c60c922] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012405366s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-624750
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-624750: (6.119210472s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

TestAddons/parallel/MetricsServer (5.89s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.821116ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-wvkqn" [e2f2b1ac-6a11-4ff9-9bc2-c1a2fb53e0d0] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014482111s
addons_test.go:414: (dbg) Run:  kubectl --context addons-624750 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/CSI (45.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 5.087134ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-624750 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-624750 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0d08d521-cd5c-4b2f-bb21-237cefde2fd9] Pending
helpers_test.go:344: "task-pv-pod" [0d08d521-cd5c-4b2f-bb21-237cefde2fd9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0d08d521-cd5c-4b2f-bb21-237cefde2fd9] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.023076773s
addons_test.go:583: (dbg) Run:  kubectl --context addons-624750 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624750 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624750 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-624750 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-624750 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-624750 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-624750 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2fa0611f-9b61-4e8e-9f8b-a9532d2f351a] Pending
helpers_test.go:344: "task-pv-pod-restore" [2fa0611f-9b61-4e8e-9f8b-a9532d2f351a] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.013754771s
addons_test.go:625: (dbg) Run:  kubectl --context addons-624750 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-624750 delete pod task-pv-pod-restore: (1.040048824s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-624750 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-624750 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.933334975s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable volumesnapshots --alsologtostderr -v=1: (1.08461002s)
--- PASS: TestAddons/parallel/CSI (45.17s)
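
The repeated helpers_test.go:394 lines above are a poll loop: the helper re-runs the jsonpath query until the claim reports Bound or the stated 6m0s deadline passes. A standalone sketch of the same loop (the real helper's interval, error handling, and logging differ; the profile, namespace, and claim name below are taken from the test output):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
	// until the claim reports Bound or the deadline passes.
	func waitForPVCBound(ctx, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-624750", "default", "hpvc", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pvc is Bound")
	}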

TestAddons/parallel/Headlamp (11.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-624750 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-624750 --alsologtostderr -v=1: (1.33181183s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-gf5sk" [42bfc63a-be6c-4687-89bb-d8baed4c3841] Pending
helpers_test.go:344: "headlamp-94b766c-gf5sk" [42bfc63a-be6c-4687-89bb-d8baed4c3841] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-gf5sk" [42bfc63a-be6c-4687-89bb-d8baed4c3841] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.039911405s
--- PASS: TestAddons/parallel/Headlamp (11.37s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-gnb47" [89941681-09ec-407a-92b7-05e842c94742] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010698924s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-624750
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (52.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-624750 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-624750 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [20f1e219-db2b-47d0-836a-62af7f6c3809] Pending
helpers_test.go:344: "test-local-path" [20f1e219-db2b-47d0-836a-62af7f6c3809] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [20f1e219-db2b-47d0-836a-62af7f6c3809] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [20f1e219-db2b-47d0-836a-62af7f6c3809] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010859867s
addons_test.go:890: (dbg) Run:  kubectl --context addons-624750 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 ssh "cat /opt/local-path-provisioner/pvc-be298c39-cf02-4c48-8430-ade38dd1c543_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-624750 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-624750 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-624750 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-624750 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.630311081s)
--- PASS: TestAddons/parallel/LocalPath (52.98s)

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rljkw" [dff4c924-1e4d-4071-b0b1-2306f0328865] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.078171482s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-624750
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-624750 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-624750 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-624750
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-624750: (12.093767149s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-624750
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-624750
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-624750
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestCertOptions (41.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-310756 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-310756 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (38.906004871s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-310756 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-310756 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-310756 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-310756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-310756
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-310756: (2.05013665s)
--- PASS: TestCertOptions (41.66s)
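
To verify the custom SANs and port by hand, one could inspect the generated certificate the same way the test does; a rough sketch (the grep pattern is illustrative, not from the harness):

    # Inspect the apiserver certificate inside the node and look for the extra SANs
    out/minikube-linux-arm64 -p cert-options-310756 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E 'IP Address:192\.168\.15\.15|DNS:www\.google\.com'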

TestCertExpiration (237.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.571925446s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.157567931s)
helpers_test.go:175: Cleaning up "cert-expiration-518397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-518397
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-518397: (6.444184897s)
--- PASS: TestCertExpiration (237.18s)
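
The ~237s wall time is dominated by waiting out the 3-minute certificate lifetime between the two starts; a sketch of the idea (the explicit sleep is illustrative, the harness uses its own wait):

    out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certs expire
    # restarting with a longer expiration forces the certificates to be regenerated
    out/minikube-linux-arm64 start -p cert-expiration-518397 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd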

TestForceSystemdFlag (39.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-911794 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-911794 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.402806858s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-911794 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-911794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-911794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-911794: (2.273282991s)
--- PASS: TestForceSystemdFlag (39.08s)
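
The `cat /etc/containerd/config.toml` step is what confirms --force-systemd took effect; a sketch of the manual check, assuming the relevant setting is the systemd cgroup driver (the grep is illustrative):

    out/minikube-linux-arm64 -p force-systemd-flag-911794 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup   # expect: SystemdCgroup = true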

TestForceSystemdEnv (41.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-143524 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1025 22:18:34.259618  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-143524 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.467545858s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-143524 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-143524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-143524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-143524: (2.192745033s)
--- PASS: TestForceSystemdEnv (41.10s)
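
Unlike the flag variant above, this test drives the same behavior through the environment; a sketch, assuming MINIKUBE_FORCE_SYSTEMD=true is the variable involved (the log itself does not show it):

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-143524 \
      --memory=2048 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-env-143524 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup   # same SystemdCgroup check as the flag test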

TestDockerEnvContainerd (49.93s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-946472 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-946472 --driver=docker  --container-runtime=containerd: (33.82187413s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-946472"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-946472": (1.406388142s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4YUwsvcx1L7T/agent.424120" SSH_AGENT_PID="424123" DOCKER_HOST=ssh://docker@127.0.0.1:33108 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4YUwsvcx1L7T/agent.424120" SSH_AGENT_PID="424123" DOCKER_HOST=ssh://docker@127.0.0.1:33108 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4YUwsvcx1L7T/agent.424120" SSH_AGENT_PID="424123" DOCKER_HOST=ssh://docker@127.0.0.1:33108 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.395885756s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4YUwsvcx1L7T/agent.424120" SSH_AGENT_PID="424123" DOCKER_HOST=ssh://docker@127.0.0.1:33108 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-946472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-946472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-946472: (2.078365806s)
--- PASS: TestDockerEnvContainerd (49.93s)
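
The SSH_AUTH_SOCK/DOCKER_HOST exports above come from `docker-env --ssh-host --ssh-add`; interactively the same wiring is usually done with eval, e.g. (a sketch):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-946472)"
    docker version       # now talks to the daemon inside the minikube node over SSH
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls      # the freshly built image should be listed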

TestErrorSpam/setup (30.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-436231 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-436231 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-436231 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-436231 --driver=docker  --container-runtime=containerd: (30.087559046s)
--- PASS: TestErrorSpam/setup (30.09s)

TestErrorSpam/start (0.9s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.97s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 stop: (1.280695943s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436231 --log_dir /tmp/nospam-436231 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17488-401064/.minikube/files/etc/test/nested/copy/406453/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1025 21:48:34.258985  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.264752  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.275075  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.295342  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.335618  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.415898  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.576351  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:34.896920  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:35.537262  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:36.817787  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:39.377931  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:44.498926  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:48:54.739476  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-934322 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m21.047547899s)
--- PASS: TestFunctional/serial/StartWithProxy (81.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-934322 --alsologtostderr -v=8: (7.009236245s)
functional_test.go:659: soft start took 7.009759127s for "functional-934322" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.01s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-934322 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:3.1: (1.383993676s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:3.3: (1.344485801s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:latest: (1.194098381s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)
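
Each `cache add` pulls the image, stores it in minikube's host-side cache, and loads it into the node's container runtime; a sketch of the round trip, with the crictl check borrowed from the verify subtest below:

    out/minikube-linux-arm64 -p functional-934322 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl images | grep pause   # image is now in the node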

TestFunctional/serial/CacheCmd/cache/add_local (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-934322 /tmp/TestFunctionalserialCacheCmdcacheadd_local4211429026/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache add minikube-local-cache-test:functional-934322
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 cache add minikube-local-cache-test:functional-934322: (1.126441194s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache delete minikube-local-cache-test:functional-934322
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-934322
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.62s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (338.430889ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 cache reload: (1.295565468s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)
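
The subtest exercises the failure-then-recover path: delete the image inside the node, confirm `crictl inspecti` fails, then repopulate it from the host-side cache. As a sketch:

    out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl inspecti registry.k8s.io/pause:latest \
      || echo "gone, as expected"
    out/minikube-linux-arm64 -p functional-934322 cache reload    # reload cached images into the node
    out/minikube-linux-arm64 -p functional-934322 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again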

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 kubectl -- --context functional-934322 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-934322 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (44.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 21:49:15.219693  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:49:56.180190  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-934322 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.83480005s)
functional_test.go:757: restart took 44.834901538s for "functional-934322" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.84s)
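
--extra-config threads component flags through to kubeadm on restart. To confirm the admission plugin actually landed on the apiserver, one could check its static pod spec; a sketch (this kubectl query is illustrative, not part of the test):

    kubectl --context functional-934322 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins   # expect: NamespaceAutoProvision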

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-934322 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 logs: (1.787719541s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (2.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 logs --file /tmp/TestFunctionalserialLogsFileCmd513560560/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 logs --file /tmp/TestFunctionalserialLogsFileCmd513560560/001/logs.txt: (2.176354309s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.18s)

TestFunctional/serial/InvalidService (5.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-934322 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-934322
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-934322: exit status 115 (637.505345ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32282 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-934322 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-934322 delete -f testdata/invalidsvc.yaml: (1.037713933s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)
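
The subtest deliberately exposes a service with no running pods and asserts that `minikube service` fails loudly (exit 115, SVC_UNREACHABLE) rather than printing a dead URL; a sketch of the same negative check:

    kubectl --context functional-934322 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-934322 \
      || echo "failed with exit $? (expected)"
    kubectl --context functional-934322 delete -f testdata/invalidsvc.yaml   # clean up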

TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 config get cpus: exit status 14 (111.497079ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 config get cpus: exit status 14 (93.127188ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)
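
Exit status 14 is minikube's "key not found in config" code, which is why the `config get cpus` calls after an unset are expected non-zero exits rather than failures. The round trip as a sketch:

    out/minikube-linux-arm64 -p functional-934322 config set cpus 2
    out/minikube-linux-arm64 -p functional-934322 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-934322 config unset cpus
    out/minikube-linux-arm64 -p functional-934322 config get cpus     # exit 14: key not in config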

TestFunctional/parallel/DashboardCmd (14.23s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-934322 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-934322 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 439020: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.23s)

TestFunctional/parallel/DryRun (0.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (233.746606ms)

-- stdout --
	* [functional-934322] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 21:50:51.770839  438424 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:50:51.771079  438424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:51.771086  438424 out.go:309] Setting ErrFile to fd 2...
	I1025 21:50:51.771091  438424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:51.771422  438424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 21:50:51.771883  438424 out.go:303] Setting JSON to false
	I1025 21:50:51.773203  438424 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5589,"bootTime":1698265063,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:50:51.773286  438424 start.go:138] virtualization:  
	I1025 21:50:51.775966  438424 out.go:177] * [functional-934322] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 21:50:51.777585  438424 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:50:51.779082  438424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:50:51.777702  438424 notify.go:220] Checking for updates...
	I1025 21:50:51.783461  438424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:50:51.785748  438424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:50:51.787549  438424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 21:50:51.789500  438424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:50:51.792272  438424 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:50:51.792805  438424 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:50:51.826538  438424 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:50:51.826667  438424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:50:51.923518  438424 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-25 21:50:51.912638341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:50:51.923623  438424 docker.go:295] overlay module found
	I1025 21:50:51.925884  438424 out.go:177] * Using the docker driver based on existing profile
	I1025 21:50:51.927865  438424 start.go:298] selected driver: docker
	I1025 21:50:51.927884  438424 start.go:902] validating driver "docker" against &{Name:functional-934322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-934322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:50:51.927997  438424 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:50:51.930479  438424 out.go:177] 
	W1025 21:50:51.932327  438424 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 21:50:51.934072  438424 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.66s)
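
Exit status 23 here is the RSRC_INSUFFICIENT_REQ_MEMORY guard: --dry-run still validates the requested memory against the 1800MB usable minimum without touching the cluster. A sketch of both sides of the check:

    # rejected: 250MB is below the usable minimum of 1800MB (exit 23)
    out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    # accepted: the same dry run without the undersized memory request
    out/minikube-linux-arm64 start -p functional-934322 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd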

TestFunctional/parallel/InternationalLanguage (0.33s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (327.082551ms)

-- stdout --
	* [functional-934322] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 21:50:52.481361  438550 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:50:52.481648  438550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:52.481658  438550 out.go:309] Setting ErrFile to fd 2...
	I1025 21:50:52.481664  438550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:52.482033  438550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 21:50:52.482438  438550 out.go:303] Setting JSON to false
	I1025 21:50:52.483555  438550 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5590,"bootTime":1698265063,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 21:50:52.483631  438550 start.go:138] virtualization:  
	I1025 21:50:52.486432  438550 out.go:177] * [functional-934322] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1025 21:50:52.489269  438550 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:50:52.491338  438550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:50:52.489400  438550 notify.go:220] Checking for updates...
	I1025 21:50:52.495868  438550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 21:50:52.498172  438550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 21:50:52.500341  438550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 21:50:52.502755  438550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:50:52.507422  438550 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 21:50:52.508187  438550 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:50:52.547348  438550 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 21:50:52.547458  438550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 21:50:52.675145  438550 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-25 21:50:52.664790257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 21:50:52.675251  438550 docker.go:295] overlay module found
	I1025 21:50:52.677423  438550 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1025 21:50:52.679423  438550 start.go:298] selected driver: docker
	I1025 21:50:52.679440  438550 start.go:902] validating driver "docker" against &{Name:functional-934322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-934322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:50:52.679528  438550 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:50:52.682268  438550 out.go:177] 
	W1025 21:50:52.684391  438550 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 21:50:52.686541  438550 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.33s)
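
The French output ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY...") is the point of the test: the same memory guard as DryRun, localized. The log does not show how the locale is selected; presumably via the environment, e.g. (LC_ALL=fr is an assumption, not taken from the log):

    # assumed locale selection; the guard should fail in French with exit 23
    LC_ALL=fr out/minikube-linux-arm64 start -p functional-934322 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd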

TestFunctional/parallel/StatusCmd (1.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)

TestFunctional/parallel/ServiceCmdConnect (7.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-934322 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-934322 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jb5jr" [424918be-aa4c-41d7-b9c6-80852861365a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jb5jr" [424918be-aa4c-41d7-b9c6-80852861365a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.014486102s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31006
functional_test.go:1674: http://192.168.49.2:31006: success! body:

Hostname: hello-node-connect-7799dfb7c6-jb5jr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31006
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.76s)
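
End to end, the subtest is: deploy an echo server, expose it as a NodePort, resolve the URL via `minikube service --url`, and GET it; a condensed sketch:

    kubectl --context functional-934322 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-934322 expose deployment hello-node-connect --type=NodePort --port=8080
    url=$(out/minikube-linux-arm64 -p functional-934322 service hello-node-connect --url)
    curl -s "$url"    # echoes the request back, as in the body above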

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (25.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [948143d7-9e85-45af-b857-2d67f3bf6970] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013817485s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-934322 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-934322 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-934322 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-934322 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [96d0f565-d4a2-4c94-bf7c-ebccb15365b7] Pending
helpers_test.go:344: "sp-pod" [96d0f565-d4a2-4c94-bf7c-ebccb15365b7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [96d0f565-d4a2-4c94-bf7c-ebccb15365b7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.021183671s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-934322 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-934322 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-934322 delete -f testdata/storage-provisioner/pod.yaml: (1.179020105s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-934322 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bac26626-6637-4a03-a6b0-18ba8c3a1d19] Pending
helpers_test.go:344: "sp-pod" [bac26626-6637-4a03-a6b0-18ba8c3a1d19] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bac26626-6637-4a03-a6b0-18ba8c3a1d19] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.024827913s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-934322 exec sp-pod -- ls /tmp/mount
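The persistence check above is a plain kubectl sequence: write a marker file through the claim, recreate the pod, and list the mount again. A condensed Go sketch of that sequence (the pod-readiness waits the test performs between steps are omitted for brevity):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and fails fast on error.
	func run(name string, args ...string) []byte {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
		return out
	}

	func main() {
		ctx := "functional-934322" // kubectl context from this run
		// Marker file on the claim, pod recreated, file must survive.
		run("kubectl", "--context", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run("kubectl", "--context", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("kubectl", "--context", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
		log.Printf("%s", run("kubectl", "--context", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}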
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.50s)

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (1.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh -n functional-934322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 cp functional-934322:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1758835561/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh -n functional-934322 "sudo cat /home/docker/cp-test.txt"
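The round trip above can be scripted directly. A small Go sketch mirroring the cp/ssh pair, using the same profile name as this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		p := "functional-934322"
		// Copy a local file onto the node, then read it back over SSH.
		if out, err := exec.Command("out/minikube-linux-arm64", "-p", p,
			"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp: %v\n%s", err, out)
		}
		out, err := exec.Command("out/minikube-linux-arm64", "-p", p,
			"ssh", "-n", p, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}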
--- PASS: TestFunctional/parallel/CpCmd (1.60s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/406453/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /etc/test/nested/copy/406453/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/406453.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /etc/ssl/certs/406453.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/406453.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /usr/share/ca-certificates/406453.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4064532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /etc/ssl/certs/4064532.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4064532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /usr/share/ca-certificates/4064532.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
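The hash-named file checked above (51391683.0) is expected to hold the same certificate as the plain .pem; that equivalence is an assumption of this sketch, not something the test asserts. A Go comparison over `minikube ssh`:

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	// sshCat reads a file inside the minikube node over `minikube ssh`.
	func sshCat(path string) []byte {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-934322",
			"ssh", "sudo cat "+path).Output()
		if err != nil {
			log.Fatalf("cat %s: %v", path, err)
		}
		return out
	}

	func main() {
		a := sshCat("/etc/ssl/certs/406453.pem")
		b := sshCat("/etc/ssl/certs/51391683.0")
		if !bytes.Equal(a, b) {
			log.Fatal("cert and its hash-named copy differ")
		}
		log.Println("certs match")
	}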
--- PASS: TestFunctional/parallel/CertSync (2.40s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-934322 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
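The label listing uses a kubectl go-template rather than client-go. The same template, driven from Go:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same go-template as the test: print every label key on the first node.
		tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		out, err := exec.Command("kubectl", "--context", "functional-934322",
			"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, label := range strings.Fields(string(out)) {
			fmt.Println(label)
		}
	}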
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "sudo systemctl is-active docker": exit status 1 (460.64839ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "sudo systemctl is-active crio": exit status 1 (359.278911ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
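`systemctl is-active` exits with status 3 for an inactive unit, which ssh propagates and minikube surfaces as exit status 1, so the non-zero exits above are the expected result. A Go sketch that treats such an exit code as success:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// With containerd as the active runtime, docker and crio must be inactive,
		// i.e. these commands are expected to exit non-zero.
		for _, unit := range []string{"docker", "crio"} {
			cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-934322",
				"ssh", "sudo systemctl is-active "+unit)
			out, err := cmd.CombinedOutput()
			if ee, ok := err.(*exec.ExitError); ok {
				fmt.Printf("%s: %s(exit %d, as expected)\n", unit, out, ee.ExitCode())
				continue
			}
			if err != nil {
				log.Fatal(err)
			}
			log.Fatalf("%s unexpectedly active: %s", unit, out)
		}
	}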
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 version -o=json --components: (1.479926282s)
--- PASS: TestFunctional/parallel/Version/components (1.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-934322 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-934322
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-934322 image ls --format short --alsologtostderr:
I1025 21:51:01.197433  439909 out.go:296] Setting OutFile to fd 1 ...
I1025 21:51:01.197697  439909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:01.197704  439909 out.go:309] Setting ErrFile to fd 2...
I1025 21:51:01.197709  439909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:01.197954  439909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
I1025 21:51:01.198682  439909 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:01.198838  439909 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:01.199326  439909 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
I1025 21:51:01.219924  439909 ssh_runner.go:195] Run: systemctl --version
I1025 21:51:01.219995  439909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
I1025 21:51:01.241251  439909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
I1025 21:51:01.343736  439909 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-934322 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| localhost/my-image                          | functional-934322  | sha256:318bcf | 831kB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| docker.io/library/nginx                     | latest             | sha256:97930d | 67.2MB |
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:a5dd5c | 22MB   |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:42a4e7 | 17.1MB |
| docker.io/library/nginx                     | alpine             | sha256:aae348 | 19.6MB |
| docker.io/library/minikube-local-cache-test | functional-934322  | sha256:2dbd5c | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537e9a | 31.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:827643 | 30.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-934322 image ls --format table --alsologtostderr:
I1025 21:51:05.713119  440258 out.go:296] Setting OutFile to fd 1 ...
I1025 21:51:05.713341  440258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:05.713372  440258 out.go:309] Setting ErrFile to fd 2...
I1025 21:51:05.713394  440258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:05.713737  440258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
I1025 21:51:05.714436  440258 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:05.714622  440258 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:05.715241  440258 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
I1025 21:51:05.735218  440258 ssh_runner.go:195] Run: systemctl --version
I1025 21:51:05.735275  440258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
I1025 21:51:05.755611  440258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
I1025 21:51:05.855033  440258 ssh_runner.go:195] Run: sudo crictl images --output json
2023/10/25 21:51:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-934322 image ls --format json --alsologtostderr:
[{"id":"sha256:2dbd5cb86741590371b587903841f2774dabdeec958418ee4eb853c8955b036b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-934322"],"size":"1007"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"17063462"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:97930d6f4eecda673e2f3d7ec2983bce00b353792d1a9044b6477a3c51fcb185","repoDigests":["docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79
de578097a1c175a16ab25814332fe33622de"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241716"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:318bcf0a14674d9d5ef5ae34c4b4bbe810124aa88e84b6cff08dc2b16dd6ae3f","repoDigests":[],"repoTags":["localhost/my-image:functional-934322"],"size":"830634"},{"id":"sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"30344361"},{"id":"sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a
6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"21981421"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.
io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"31557550"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:829e9de33
8bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19561536"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-934322 image ls --format json --alsologtostderr:
I1025 21:51:05.456799  440232 out.go:296] Setting OutFile to fd 1 ...
I1025 21:51:05.457006  440232 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:05.457015  440232 out.go:309] Setting ErrFile to fd 2...
I1025 21:51:05.457021  440232 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:05.457314  440232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
I1025 21:51:05.457988  440232 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:05.458129  440232 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:05.458628  440232 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
I1025 21:51:05.479121  440232 ssh_runner.go:195] Run: systemctl --version
I1025 21:51:05.479186  440232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
I1025 21:51:05.499008  440232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
I1025 21:51:05.595170  440232 ssh_runner.go:195] Run: sudo crictl images --output json
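The JSON shape above (id, repoDigests, repoTags, size) is easy to consume from Go. A sketch that lists tagged images and their sizes:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// image mirrors the fields visible in the JSON output above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-934322",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatal(err)
		}
		for _, img := range images {
			if len(img.RepoTags) > 0 {
				fmt.Println(img.RepoTags[0], img.Size)
			}
		}
	}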
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-934322 image ls --format yaml --alsologtostderr:
- id: sha256:2dbd5cb86741590371b587903841f2774dabdeec958418ee4eb853c8955b036b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-934322
size: "1007"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "31557550"
- id: sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "21981421"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "30344361"
- id: sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "17063462"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "19561536"
- id: sha256:97930d6f4eecda673e2f3d7ec2983bce00b353792d1a9044b6477a3c51fcb185
repoDigests:
- docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de
repoTags:
- docker.io/library/nginx:latest
size: "67241716"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-934322 image ls --format yaml --alsologtostderr:
I1025 21:51:01.506611  439937 out.go:296] Setting OutFile to fd 1 ...
I1025 21:51:01.506888  439937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:01.506914  439937 out.go:309] Setting ErrFile to fd 2...
I1025 21:51:01.506933  439937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:01.507490  439937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
I1025 21:51:01.508400  439937 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:01.508602  439937 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:01.509383  439937 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
I1025 21:51:01.540482  439937 ssh_runner.go:195] Run: systemctl --version
I1025 21:51:01.540541  439937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
I1025 21:51:01.562317  439937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
I1025 21:51:01.659662  439937 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh pgrep buildkitd: exit status 1 (454.229476ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image build -t localhost/my-image:functional-934322 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-934322 image build -t localhost/my-image:functional-934322 testdata/build --alsologtostderr: (2.937549418s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-934322 image build -t localhost/my-image:functional-934322 testdata/build --alsologtostderr:
I1025 21:51:02.280685  440014 out.go:296] Setting OutFile to fd 1 ...
I1025 21:51:02.281431  440014 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:02.281473  440014 out.go:309] Setting ErrFile to fd 2...
I1025 21:51:02.281496  440014 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:51:02.281920  440014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
I1025 21:51:02.282916  440014 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:02.283964  440014 config.go:182] Loaded profile config "functional-934322": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1025 21:51:02.284793  440014 cli_runner.go:164] Run: docker container inspect functional-934322 --format={{.State.Status}}
I1025 21:51:02.319671  440014 ssh_runner.go:195] Run: systemctl --version
I1025 21:51:02.319738  440014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-934322
I1025 21:51:02.356747  440014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/functional-934322/id_rsa Username:docker}
I1025 21:51:02.455496  440014 build_images.go:151] Building image from path: /tmp/build.738526261.tar
I1025 21:51:02.455576  440014 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 21:51:02.469976  440014 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.738526261.tar
I1025 21:51:02.477669  440014 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.738526261.tar: stat -c "%s %y" /var/lib/minikube/build/build.738526261.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.738526261.tar': No such file or directory
I1025 21:51:02.477698  440014 ssh_runner.go:362] scp /tmp/build.738526261.tar --> /var/lib/minikube/build/build.738526261.tar (3072 bytes)
I1025 21:51:02.511529  440014 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.738526261
I1025 21:51:02.529706  440014 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.738526261 -xf /var/lib/minikube/build/build.738526261.tar
I1025 21:51:02.543604  440014 containerd.go:378] Building image: /var/lib/minikube/build/build.738526261
I1025 21:51:02.543693  440014 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.738526261 --local dockerfile=/var/lib/minikube/build/build.738526261 --output type=image,name=localhost/my-image:functional-934322
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.2s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:148298f9b2fd233d3e78a772dcb680cbb15072d8ad1323eaae04eb71a2ed24f3 0.0s done
#8 exporting config sha256:318bcf0a14674d9d5ef5ae34c4b4bbe810124aa88e84b6cff08dc2b16dd6ae3f 0.0s done
#8 naming to localhost/my-image:functional-934322 done
#8 DONE 0.1s
I1025 21:51:05.088355  440014 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.738526261 --local dockerfile=/var/lib/minikube/build/build.738526261 --output type=image,name=localhost/my-image:functional-934322: (2.544627892s)
I1025 21:51:05.088439  440014 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.738526261
I1025 21:51:05.101999  440014 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.738526261.tar
I1025 21:51:05.113692  440014 build_images.go:207] Built localhost/my-image:functional-934322 from /tmp/build.738526261.tar
I1025 21:51:05.113724  440014 build_images.go:123] succeeded building to: functional-934322
I1025 21:51:05.113728  440014 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls
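To reproduce the build outside the harness, the same `image build` entry point works against any directory containing a Dockerfile. A sketch with an illustrative Dockerfile (not the testdata one):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		// Build a trivial image through minikube's buildkit path, as above.
		dir, err := os.MkdirTemp("", "build")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)
		df := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\n"
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(df), 0o644); err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-934322",
			"image", "build", "-t", "localhost/my-image:functional-934322", dir)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}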
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

TestFunctional/parallel/ImageCommands/Setup (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.443492963s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-934322
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.47s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-934322 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-934322 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-ctm7s" [b6a4ac6e-2b69-456e-8a79-96592354d06d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-ctm7s" [b6a4ac6e-2b69-456e-8a79-96592354d06d] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.078775588s
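The deployment is two kubectl calls: create the deployment, then expose it as a NodePort service. The same pair from Go:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		ctx := "functional-934322"
		// Deployment plus NodePort service, the same two kubectl calls as above.
		steps := [][]string{
			{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8"},
			{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		}
		for _, s := range steps {
			args := append([]string{"--context", ctx}, s...)
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				log.Fatalf("kubectl %v: %v\n%s", s, err, out)
			}
		}
	}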
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.44s)

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service list -o json
functional_test.go:1493: Took "530.580686ms" to run "out/minikube-linux-arm64 -p functional-934322 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32489
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32489
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image rm gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-934322
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 image save --daemon gcr.io/google-containers/addon-resizer:functional-934322 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-934322
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.82s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 436580: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.82s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-934322 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7396e803-8b75-4f83-b848-b6bda0eeffc7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7396e803-8b75-4f83-b848-b6bda0eeffc7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.02339979s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-934322 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
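With `minikube tunnel` running, the ingress IP appears asynchronously, so scripted checks usually poll the same jsonpath the test reads here. A sketch (the 30 x 2s budget is arbitrary):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Poll until the LoadBalancer service is assigned an ingress IP.
		jp := `jsonpath={.status.loadBalancer.ingress[0].ip}`
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-934322",
				"get", "svc", "nginx-svc", "-o", jp).Output()
			if err != nil {
				log.Fatal(err)
			}
			if ip := strings.TrimSpace(string(out)); ip != "" {
				fmt.Println("ingress IP:", ip)
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("no ingress IP assigned")
	}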
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.166.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-934322 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "470.757779ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "81.352616ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "371.653422ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "76.187206ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
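The three ProfileCmd checks above only assert that the `profile list` variants exit cleanly and quickly; consuming the JSON output programmatically looks roughly like the sketch below. The valid/invalid split and the per-profile `Name` field reflect the shape `minikube profile list -o json` emits, but treat the struct as an assumption and decode defensively.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models the assumed shape of `minikube profile list -o json`:
// profiles split into "valid" and "invalid" buckets, each carrying a Name.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Printf("%d invalid profile(s)\n", len(pl.Invalid))
}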

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdany-port64797601/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698270646118492628" to /tmp/TestFunctionalparallelMountCmdany-port64797601/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698270646118492628" to /tmp/TestFunctionalparallelMountCmdany-port64797601/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698270646118492628" to /tmp/TestFunctionalparallelMountCmdany-port64797601/001/test-1698270646118492628
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (411.513205ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 21:50 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 21:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 21:50 test-1698270646118492628
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh cat /mount-9p/test-1698270646118492628
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-934322 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ac6ee861-0105-4c38-b946-95cbfa77e0cd] Pending
helpers_test.go:344: "busybox-mount" [ac6ee861-0105-4c38-b946-95cbfa77e0cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ac6ee861-0105-4c38-b946-95cbfa77e0cd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ac6ee861-0105-4c38-b946-95cbfa77e0cd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.017034207s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-934322 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdany-port64797601/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.39s)
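Note the pattern in the transcript above: the first `findmnt -T /mount-9p` exits with status 1 because the 9p mount attaches asynchronously after the mount daemon starts, and the harness simply retries until it appears. A minimal sketch of the same start-then-retry loop; the host path /tmp/mount-src and the retry budget are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the mount daemon, as the test does with "(dbg) daemon".
	mount := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", "functional-934322", "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The mount appears asynchronously, so retry findmnt inside the guest.
	for i := 0; i < 10; i++ { // retry budget is an assumption
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-934322",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		time.Sleep(time.Second)
	}
	panic("/mount-9p never became a 9p mount")
}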

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdspecific-port739508661/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (664.694761ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdspecific-port739508661/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "sudo umount -f /mount-9p": exit status 1 (322.29113ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-934322 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdspecific-port739508661/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T" /mount1: exit status 1 (889.610047ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-934322 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-934322 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-934322 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3634719173/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.73s)
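VerifyCleanup leans on `minikube mount --kill=true`, which terminates every mount daemon for the profile at once; that is why the three subsequent stop attempts report "unable to find parent, assuming dead". A short sketch of the one-shot cleanup call, under the assumption that you drive the same binary from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Kill all mount daemons for the profile in one shot,
	// mirroring functional_test_mount_test.go:370 above.
	out, err := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", "functional-934322", "--kill=true").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("mount --kill failed: %v\n%s", err, out))
	}
	fmt.Println("all mount processes for the profile terminated")
}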

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-934322
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-934322
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-934322
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (92.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-356915 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1025 21:51:18.101596  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-356915 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m32.571477251s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.57s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons enable ingress --alsologtostderr -v=5: (10.520500045s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.52s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-356915 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                    
TestJSONOutput/start/Command (87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-787308 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1025 21:54:01.942636  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-787308 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m27.003516106s)
--- PASS: TestJSONOutput/start/Command (87.00s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.84s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-787308 --output=json --user=testUser
E1025 21:55:12.165496  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.170734  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.181048  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.201457  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.241730  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.321924  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:12.482247  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.84s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-787308 --output=json --user=testUser
E1025 21:55:12.803256  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.78s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-787308 --output=json --user=testUser
E1025 21:55:13.443942  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:14.724458  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:17.285390  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-787308 --output=json --user=testUser: (5.800265168s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-664175 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-664175 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.208598ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1683a845-4278-4e1c-ab01-a8710a2dd56e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664175] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c15360af-7802-4540-a513-e10caf0ce391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"8c4bedb7-9a9a-452a-ae8f-239677926c6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0759cc2d-481f-4fcd-938a-24cbbcbc09d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig"}}
	{"specversion":"1.0","id":"609ef483-80a7-4eca-9cb1-ec03db18a0ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube"}}
	{"specversion":"1.0","id":"8e10eb0b-cdca-4bad-a228-6b52d79512c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bc5b69a6-1b02-43d7-a295-e78bf27fe5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b82ebc5-0eea-43d4-9df6-2f08cfd9a8f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-664175
--- PASS: TestErrorJSONOutput (0.26s)
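The stdout above shows the event format used by `--output=json`: line-delimited CloudEvents whose `type` distinguishes step, info, and error records (the DRV_UNSUPPORTED_OS error carries the exit code 56 asserted by the test). A sketch of decoding that stream from Go; the field names come from the events printed above, and anything beyond those fields is an assumption.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents lines shown in the stdout above.
type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"`
}

func main() {
	// Read the JSON stream from stdin, e.g.:
	//   out/minikube-linux-arm64 start -p demo --output=json | ./decoder
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines interleaved in the output
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}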

                                                
                                    
TestKicCustomNetwork/create_custom_network (43.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-917919 --network=
E1025 21:55:32.645815  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:55:53.126110  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-917919 --network=: (40.852842661s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-917919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-917919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-917919: (2.202717487s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.08s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.47s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-637341 --network=bridge
E1025 21:56:34.087187  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-637341 --network=bridge: (31.383675026s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-637341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-637341
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-637341: (2.063302387s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.47s)

                                                
                                    
TestKicExistingNetwork (35.8s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-666356 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-666356 --network=existing-network: (33.555163893s)
helpers_test.go:175: Cleaning up "existing-network-666356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-666356
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-666356: (2.083880284s)
--- PASS: TestKicExistingNetwork (35.80s)
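TestKicExistingNetwork pre-creates a docker network and asks minikube to reuse it via `--network=existing-network` instead of provisioning its own. A sketch of the same two steps, assuming you drive both CLIs from Go; the network name matches the test, while the profile name here is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// 1. Create the network outside of minikube, as the test does first.
	if out, err := exec.Command("docker", "network", "create",
		"existing-network").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("network create: %v\n%s", err, out))
	}

	// 2. Point minikube at it instead of letting it provision one.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "existing-network-demo", "--network=existing-network")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}
	fmt.Println("cluster attached to the pre-existing docker network")
}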

                                                
                                    
TestKicCustomSubnet (35.77s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-675514 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-675514 --subnet=192.168.60.0/24: (33.653122588s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-675514 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-675514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-675514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-675514: (2.093687239s)
--- PASS: TestKicCustomSubnet (35.77s)
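The subnet check above is worth copying whenever you script `--subnet=`: `docker network inspect` with a Go template confirms that the network minikube created actually carries the requested CIDR. A sketch of the start-then-verify pair; the template string and CIDR are the ones used in this run, while the profile name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile, want = "custom-subnet-demo", "192.168.60.0/24"

	start := exec.Command("out/minikube-linux-arm64", "start", "-p", profile, "--subnet="+want)
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}

	// Same template the test uses: the first IPAM config entry's subnet.
	out, err := exec.Command("docker", "network", "inspect", profile,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("subnet mismatch: got %s, want %s", got, want))
	}
	fmt.Println("network carries the requested subnet:", want)
}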

                                                
                                    
TestKicStaticIP (34.8s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-114865 --static-ip=192.168.200.200
E1025 21:57:53.634235  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.639493  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.649822  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.670078  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.710349  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.790691  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:53.951537  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:54.272583  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:54.913456  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:56.007745  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 21:57:56.194505  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:57:58.754918  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:58:03.875124  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 21:58:14.115816  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-114865 --static-ip=192.168.200.200: (32.520073461s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-114865 ip
helpers_test.go:175: Cleaning up "static-ip-114865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-114865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-114865: (2.096141307s)
--- PASS: TestKicStaticIP (34.80s)
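The same start-then-verify shape applies to `--static-ip`: request an address, then confirm `minikube ip` reports it, as the test does at kic_custom_network_test.go:138. A sketch with the IP from this run and a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile, want = "static-ip-demo", "192.168.200.200"

	if out, err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", profile, "--static-ip="+want).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}

	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("ip mismatch: got %s, want %s", got, want))
	}
	fmt.Println("node pinned to", want)
}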

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (68.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-514601 --driver=docker  --container-runtime=containerd
E1025 21:58:34.259126  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 21:58:34.596587  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-514601 --driver=docker  --container-runtime=containerd: (30.361587437s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-517194 --driver=docker  --container-runtime=containerd
E1025 21:59:15.557176  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-517194 --driver=docker  --container-runtime=containerd: (32.259064622s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-514601
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-517194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-517194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-517194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-517194: (2.072098687s)
helpers_test.go:175: Cleaning up "first-514601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-514601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-514601: (2.284569603s)
--- PASS: TestMinikubeProfile (68.34s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-652672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-652672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.430443727s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.43s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-652672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-656742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-656742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.555940209s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.56s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-656742 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-652672 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-652672 --alsologtostderr -v=5: (1.69965923s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-656742 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-656742
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-656742: (1.234135304s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.66s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-656742
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-656742: (6.663962171s)
--- PASS: TestMountStart/serial/RestartStopped (7.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-656742 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)
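The MountStart sequence above demonstrates that a mount declared at `start` time (`--mount` plus its port/uid/gid flags) is re-established after `stop` and a fresh `start`, which is exactly what VerifyMountPostStop checks. A compact sketch of the declare/stop/restart/verify cycle; the profile name is hypothetical and the flag subset is an assumption based on the commands logged above.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and panics with its combined output on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	const mk, profile = "out/minikube-linux-arm64", "mount-demo"

	// Declare the mount at start time, as the MountStart tests do.
	run(mk, "start", "-p", profile, "--memory=2048", "--mount",
		"--mount-port", "46464", "--no-kubernetes", "--driver=docker")

	run(mk, "stop", "-p", profile)
	run(mk, "start", "-p", profile) // restart re-establishes the mount

	// VerifyMountPostStop equivalent: the host dir is visible in the guest.
	fmt.Print(run(mk, "-p", profile, "ssh", "--", "ls", "/minikube-host"))
}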

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (103.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-318283 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1025 22:00:12.165898  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 22:00:37.478069  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 22:00:39.848200  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-318283 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m42.773718788s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.51s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-318283 -- rollout status deployment/busybox: (2.745066587s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-vgcpn -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-wn4hs -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-vgcpn -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-wn4hs -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-vgcpn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-wn4hs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)
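DeployApp2Nodes schedules a busybox replica per node and then resolves kubernetes.io, kubernetes.default, and the full service FQDN from inside each pod, proving cluster DNS works from every node. A sketch of that verification loop under the assumption that you drive kubectl from Go; pod names are discovered with the same jsonpath query the test uses rather than hard-coded.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubectl := func(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "multinode-318283"}, args...)...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		return string(out)
	}

	// Discover the pod names, as the test does via jsonpath.
	names := strings.Fields(kubectl("get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}"))

	// Resolve the same three names from inside every pod.
	for _, pod := range names {
		for _, host := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			kubectl("exec", pod, "--", "nslookup", host)
			fmt.Printf("%s: resolved %s\n", pod, host)
		}
	}
}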

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-vgcpn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-vgcpn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-wn4hs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-318283 -- exec busybox-5bc68d56bd-wn4hs -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.21s)

                                                
                                    
TestMultiNode/serial/AddNode (17.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-318283 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-318283 -v 3 --alsologtostderr: (16.819272953s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.58s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp testdata/cp-test.txt multinode-318283:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile530303001/001/cp-test_multinode-318283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283:/home/docker/cp-test.txt multinode-318283-m02:/home/docker/cp-test_multinode-318283_multinode-318283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test_multinode-318283_multinode-318283-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283:/home/docker/cp-test.txt multinode-318283-m03:/home/docker/cp-test_multinode-318283_multinode-318283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test_multinode-318283_multinode-318283-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp testdata/cp-test.txt multinode-318283-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile530303001/001/cp-test_multinode-318283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m02:/home/docker/cp-test.txt multinode-318283:/home/docker/cp-test_multinode-318283-m02_multinode-318283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test_multinode-318283-m02_multinode-318283.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m02:/home/docker/cp-test.txt multinode-318283-m03:/home/docker/cp-test_multinode-318283-m02_multinode-318283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test_multinode-318283-m02_multinode-318283-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp testdata/cp-test.txt multinode-318283-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile530303001/001/cp-test_multinode-318283-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m03:/home/docker/cp-test.txt multinode-318283:/home/docker/cp-test_multinode-318283-m03_multinode-318283.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283 "sudo cat /home/docker/cp-test_multinode-318283-m03_multinode-318283.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 cp multinode-318283-m03:/home/docker/cp-test.txt multinode-318283-m02:/home/docker/cp-test_multinode-318283-m03_multinode-318283-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 ssh -n multinode-318283-m02 "sudo cat /home/docker/cp-test_multinode-318283-m03_multinode-318283-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.68s)
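The CopyFile transcript above is the full n-by-n matrix: copy into each node, back out to the host, and across every node pair, verifying each hop with `ssh -n <node> "sudo cat ..."`. A reduced sketch of one host-to-node and one node-to-node round trip; the node and profile names come from this run, and the destination filename is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := func(args ...string) string {
		out, err := exec.Command("out/minikube-linux-arm64",
			append([]string{"-p", "multinode-318283"}, args...)...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
		}
		return string(out)
	}

	// Host -> node, then verify on the node.
	mk("cp", "testdata/cp-test.txt", "multinode-318283:/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "multinode-318283", "sudo cat /home/docker/cp-test.txt"))

	// Node -> node, then verify on the destination node.
	mk("cp", "multinode-318283:/home/docker/cp-test.txt",
		"multinode-318283-m02:/home/docker/cp-test_copy.txt")
	fmt.Print(mk("ssh", "-n", "multinode-318283-m02", "sudo cat /home/docker/cp-test_copy.txt"))
}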

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-318283 node stop m03: (1.249763034s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-318283 status: exit status 7 (596.441795ms)

                                                
                                                
-- stdout --
	multinode-318283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-318283-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-318283-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr: exit status 7 (578.848272ms)

                                                
                                                
-- stdout --
	multinode-318283
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-318283-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-318283-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:02:23.432215  487640 out.go:296] Setting OutFile to fd 1 ...
	I1025 22:02:23.432416  487640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:23.432478  487640 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:23.432501  487640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:23.432824  487640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 22:02:23.433164  487640 out.go:303] Setting JSON to false
	I1025 22:02:23.433255  487640 mustload.go:65] Loading cluster: multinode-318283
	I1025 22:02:23.433325  487640 notify.go:220] Checking for updates...
	I1025 22:02:23.433746  487640 config.go:182] Loaded profile config "multinode-318283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 22:02:23.433782  487640 status.go:255] checking status of multinode-318283 ...
	I1025 22:02:23.435543  487640 cli_runner.go:164] Run: docker container inspect multinode-318283 --format={{.State.Status}}
	I1025 22:02:23.454010  487640 status.go:330] multinode-318283 host status = "Running" (err=<nil>)
	I1025 22:02:23.454033  487640 host.go:66] Checking if "multinode-318283" exists ...
	I1025 22:02:23.454369  487640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-318283
	I1025 22:02:23.474679  487640 host.go:66] Checking if "multinode-318283" exists ...
	I1025 22:02:23.474993  487640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 22:02:23.475038  487640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-318283
	I1025 22:02:23.503665  487640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/multinode-318283/id_rsa Username:docker}
	I1025 22:02:23.603942  487640 ssh_runner.go:195] Run: systemctl --version
	I1025 22:02:23.609595  487640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:02:23.623143  487640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 22:02:23.693711  487640 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-25 22:02:23.683947446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 22:02:23.694329  487640 kubeconfig.go:92] found "multinode-318283" server: "https://192.168.58.2:8443"
	I1025 22:02:23.694367  487640 api_server.go:166] Checking apiserver status ...
	I1025 22:02:23.694417  487640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:02:23.707663  487640 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1295/cgroup
	I1025 22:02:23.719213  487640 api_server.go:182] apiserver freezer: "3:freezer:/docker/243233fab1f4e7fe43466d10b54bce9a10493d2f5ea60a4a67ef1663cddeb6a3/kubepods/burstable/pod3764bb43da7dda4cfd3626ba0b7b7498/aa68c75cea2008fc0a602108c164f5bdef464544a7a7b794855ce1056718d4eb"
	I1025 22:02:23.719285  487640 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/243233fab1f4e7fe43466d10b54bce9a10493d2f5ea60a4a67ef1663cddeb6a3/kubepods/burstable/pod3764bb43da7dda4cfd3626ba0b7b7498/aa68c75cea2008fc0a602108c164f5bdef464544a7a7b794855ce1056718d4eb/freezer.state
	I1025 22:02:23.729992  487640 api_server.go:204] freezer state: "THAWED"
	I1025 22:02:23.730020  487640 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1025 22:02:23.738681  487640 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1025 22:02:23.738710  487640 status.go:421] multinode-318283 apiserver status = Running (err=<nil>)
	I1025 22:02:23.738721  487640 status.go:257] multinode-318283 status: &{Name:multinode-318283 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:02:23.738737  487640 status.go:255] checking status of multinode-318283-m02 ...
	I1025 22:02:23.739035  487640 cli_runner.go:164] Run: docker container inspect multinode-318283-m02 --format={{.State.Status}}
	I1025 22:02:23.758279  487640 status.go:330] multinode-318283-m02 host status = "Running" (err=<nil>)
	I1025 22:02:23.758313  487640 host.go:66] Checking if "multinode-318283-m02" exists ...
	I1025 22:02:23.758677  487640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-318283-m02
	I1025 22:02:23.781488  487640 host.go:66] Checking if "multinode-318283-m02" exists ...
	I1025 22:02:23.781867  487640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 22:02:23.781933  487640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-318283-m02
	I1025 22:02:23.802692  487640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/17488-401064/.minikube/machines/multinode-318283-m02/id_rsa Username:docker}
	I1025 22:02:23.899893  487640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:02:23.913601  487640 status.go:257] multinode-318283-m02 status: &{Name:multinode-318283-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:02:23.913637  487640 status.go:255] checking status of multinode-318283-m03 ...
	I1025 22:02:23.913969  487640 cli_runner.go:164] Run: docker container inspect multinode-318283-m03 --format={{.State.Status}}
	I1025 22:02:23.936790  487640 status.go:330] multinode-318283-m03 host status = "Stopped" (err=<nil>)
	I1025 22:02:23.936810  487640 status.go:343] host is not running, skipping remaining checks
	I1025 22:02:23.936817  487640 status.go:257] multinode-318283-m03 status: &{Name:multinode-318283-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
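
Note: the non-zero exit above is the behavior under test: `minikube status` exits with code 7 when any node's host is stopped, while still printing the per-node report. A minimal sketch of consuming that contract from Go, with a hypothetical profile name:

    // status_exit.go - sketch of the exit-code contract checked above:
    // `minikube status` returns 7 when a node's host is stopped.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // "my-profile" is a hypothetical placeholder.
        out, err := exec.Command("minikube", "-p", "my-profile", "status").CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            fmt.Printf("at least one node is stopped:\n%s", out)
            return
        }
        if err != nil {
            fmt.Println("status failed for another reason:", err)
            return
        }
        fmt.Printf("all nodes running:\n%s", out)
    }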

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-318283 node start m03 --alsologtostderr: (11.488403923s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (121.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-318283
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-318283
E1025 22:02:53.633989  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-318283: (25.267636036s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-318283 --wait=true -v=8 --alsologtostderr
E1025 22:03:21.318839  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 22:03:34.258973  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-318283 --wait=true -v=8 --alsologtostderr: (1m35.775845091s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-318283
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.21s)
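
Note: RestartKeepsNodes asserts that `minikube node list` reports the same nodes before and after a full stop/start cycle. A minimal sketch of that before/after comparison, assuming a hypothetical profile named my-profile:

    // restart_nodes.go - sketch of the invariant tested above: the node
    // list is unchanged across a stop/start cycle.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func mustOutput(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        before := mustOutput("node", "list", "-p", "my-profile")
        mustOutput("stop", "-p", "my-profile")
        mustOutput("start", "-p", "my-profile", "--wait=true")
        if after := mustOutput("node", "list", "-p", "my-profile"); after != before {
            log.Fatalf("node list changed across restart:\n%s\nvs:\n%s", before, after)
        }
        fmt.Println("restart kept all nodes")
    }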

                                                
                                    
TestMultiNode/serial/DeleteNode (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-318283 node delete m03: (4.53821881s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)
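
Note: the final `kubectl get nodes` above uses a go-template that prints one Ready-condition status per node, so the test can count the nodes remaining after the delete. A minimal sketch that runs the same template and tallies Ready nodes (the expected count is an assumption):

    // ready_count.go - sketch of the Ready-node tally behind the
    // go-template invocation shown above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template the test uses: one Ready condition status per node.
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            log.Fatal(err)
        }
        ready := 0
        for _, s := range strings.Fields(string(out)) {
            if s == "True" {
                ready++
            }
        }
        fmt.Printf("%d Ready nodes\n", ready) // expect 2 once m03 is deleted
    }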

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 stop
E1025 22:04:57.302896  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-318283 stop: (23.92429073s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-318283 status: exit status 7 (120.937082ms)

                                                
                                                
-- stdout --
	multinode-318283
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-318283-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr: exit status 7 (117.087811ms)

                                                
                                                
-- stdout --
	multinode-318283
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-318283-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:05:07.052331  496207 out.go:296] Setting OutFile to fd 1 ...
	I1025 22:05:07.052463  496207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:05:07.052470  496207 out.go:309] Setting ErrFile to fd 2...
	I1025 22:05:07.052476  496207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:05:07.052765  496207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 22:05:07.052968  496207 out.go:303] Setting JSON to false
	I1025 22:05:07.053046  496207 mustload.go:65] Loading cluster: multinode-318283
	I1025 22:05:07.053127  496207 notify.go:220] Checking for updates...
	I1025 22:05:07.053521  496207 config.go:182] Loaded profile config "multinode-318283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 22:05:07.053534  496207 status.go:255] checking status of multinode-318283 ...
	I1025 22:05:07.054499  496207 cli_runner.go:164] Run: docker container inspect multinode-318283 --format={{.State.Status}}
	I1025 22:05:07.075262  496207 status.go:330] multinode-318283 host status = "Stopped" (err=<nil>)
	I1025 22:05:07.075286  496207 status.go:343] host is not running, skipping remaining checks
	I1025 22:05:07.075293  496207 status.go:257] multinode-318283 status: &{Name:multinode-318283 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:05:07.075342  496207 status.go:255] checking status of multinode-318283-m02 ...
	I1025 22:05:07.075645  496207 cli_runner.go:164] Run: docker container inspect multinode-318283-m02 --format={{.State.Status}}
	I1025 22:05:07.095128  496207 status.go:330] multinode-318283-m02 host status = "Stopped" (err=<nil>)
	I1025 22:05:07.095152  496207 status.go:343] host is not running, skipping remaining checks
	I1025 22:05:07.095159  496207 status.go:257] multinode-318283-m02 status: &{Name:multinode-318283-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-318283 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1025 22:05:12.165924  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-318283 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m26.924161186s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-318283 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-318283
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-318283-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-318283-m02 --driver=docker  --container-runtime=containerd: exit status 14 (105.85426ms)

                                                
                                                
-- stdout --
	* [multinode-318283-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-318283-m02' is duplicated with machine name 'multinode-318283-m02' in profile 'multinode-318283'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-318283-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-318283-m03 --driver=docker  --container-runtime=containerd: (33.84069386s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-318283
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-318283: exit status 80 (362.896489ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-318283
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-318283-m03 already exists in multinode-318283-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-318283-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-318283-m03: (2.158394344s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.54s)
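
Note: ValidateNameConflict relies on two exit codes visible above: 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile, and 80 (GUEST_NODE_ADD) when `node add` would recreate an already-claimed node. A minimal sketch of asserting those codes, with hypothetical profile names:

    // name_conflict.go - sketch of the exit-code assertions used above.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    // expectExit runs minikube and checks for a specific exit code.
    func expectExit(want int, args ...string) {
        err := exec.Command("minikube", args...).Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == want {
            fmt.Printf("got expected exit %d for %v\n", want, args)
            return
        }
        log.Fatalf("wanted exit %d for %v, got %v", want, args, err)
    }

    func main() {
        // "existing" / "existing-m02" are hypothetical profile names; the
        // -m02 name is assumed to collide with a node of profile "existing".
        expectExit(14, "start", "-p", "existing-m02", "--driver=docker")
        expectExit(80, "node", "add", "-p", "existing")
    }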

                                                
                                    
TestPreload (140.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-851812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1025 22:07:53.634292  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-851812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.527884396s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-851812 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-851812 image pull gcr.io/k8s-minikube/busybox: (1.277355977s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-851812
E1025 22:08:34.258993  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-851812: (12.061570301s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-851812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-851812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (52.205804328s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-851812 image list
helpers_test.go:175: Cleaning up "test-preload-851812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-851812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-851812: (2.440466187s)
--- PASS: TestPreload (140.79s)
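
Note: TestPreload starts on a pinned Kubernetes version with --preload=false, pulls an extra image, then stops and restarts to confirm the image survives the restart. A minimal sketch of the pull/stop/start/list flow, assuming a hypothetical profile named preload-check:

    // preload_check.go - sketch of the image-survives-restart flow above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // run prefixes every call with the hypothetical profile name.
    func run(args ...string) string {
        full := append([]string{"-p", "preload-check"}, args...)
        out, err := exec.Command("minikube", full...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", full, err, out)
        }
        return string(out)
    }

    func main() {
        run("image", "pull", "gcr.io/k8s-minikube/busybox")
        run("stop")
        run("start", "--wait=true")
        if !strings.Contains(run("image", "list"), "busybox") {
            log.Fatal("pulled image missing after restart")
        }
        fmt.Println("image survived stop/start")
    }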

                                                
                                    
TestInsufficientStorage (10.45s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-990655 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-990655 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.793550272s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c883e644-65bf-4e96-9d8b-54704ca7faca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-990655] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9832639-6a2a-4aba-af1f-badd8c30b031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"d3c4b4a8-0567-491f-b66a-627db76eb8ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"37f4a58a-fd1b-4d11-93ce-38dcfa75e307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig"}}
	{"specversion":"1.0","id":"db1679e1-f430-4c43-925f-bc5f27b966b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube"}}
	{"specversion":"1.0","id":"49d292d9-9568-4ca5-a0cd-48bfe54019a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0d594380-1048-4b10-9f77-1dcdef7f0728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"157c05e2-2558-4a5a-abe9-a16e059172f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"254abb99-ba69-48c7-bcab-e5ef6d352fc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4dd5bddf-fe6f-4c5b-a716-64ecc514ba46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f461469f-4c4a-4c04-894d-9f3e729747c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"76f45c3f-adb6-4245-bb5e-fada72192eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-990655 in cluster insufficient-storage-990655","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a31286e8-f3a5-4b00-beb7-9a1750222e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b66e528d-0978-44a6-a79c-c663fc962ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b919501-c118-403d-8b56-976aea609f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-990655 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-990655 --output=json --layout=cluster: exit status 7 (347.88221ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-990655","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-990655","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:10:23.233979  512983 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-990655" does not appear in /home/jenkins/minikube-integration/17488-401064/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-990655 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-990655 --output=json --layout=cluster: exit status 7 (335.739991ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-990655","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-990655","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:10:23.571777  513037 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-990655" does not appear in /home/jenkins/minikube-integration/17488-401064/kubeconfig
	E1025 22:10:23.583883  513037 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/insufficient-storage-990655/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-990655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-990655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-990655: (1.971677466s)
--- PASS: TestInsufficientStorage (10.45s)
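
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, and this test keys off the io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A minimal sketch of scanning such an event stream (reading from stdin is an assumption; the test scans captured output):

    // events_scan.go - sketch of decoding minikube's JSON event stream.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event keeps only the fields this check needs.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip non-JSON noise
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error exitcode=%s: %s\n", ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }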

                                                
                                    
TestRunningBinaryUpgrade (86.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2739704145.exe start -p running-upgrade-303507 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2739704145.exe start -p running-upgrade-303507 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.696856758s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-303507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-303507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.120473159s)
helpers_test.go:175: Cleaning up "running-upgrade-303507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-303507
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-303507: (2.87582848s)
--- PASS: TestRunningBinaryUpgrade (86.04s)

                                                
                                    
TestKubernetesUpgrade (377.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.749682959s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-776977
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-776977: (1.315944023s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-776977 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-776977 status --format={{.Host}}: exit status 7 (85.649988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1025 22:12:53.634083  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m44.7758362s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-776977 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (102.867346ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-776977] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-776977
	    minikube start -p kubernetes-upgrade-776977 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7769772 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-776977 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-776977 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.042430415s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-776977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-776977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-776977: (2.702368244s)
--- PASS: TestKubernetesUpgrade (377.92s)
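
Note: TestKubernetesUpgrade walks start-old / stop / start-new, then asserts that an in-place downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of that sequence, with a hypothetical profile name:

    // upgrade_check.go - sketch of the upgrade/downgrade sequence above.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        const p = "upgrade-check" // hypothetical profile name
        mustRun := func(args ...string) {
            if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", args, err, out)
            }
        }
        mustRun("start", "-p", p, "--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=containerd")
        mustRun("stop", "-p", p)
        mustRun("start", "-p", p, "--kubernetes-version=v1.28.3", "--driver=docker", "--container-runtime=containerd")

        // Downgrading in place must fail with exit code 106.
        err := exec.Command("minikube", "start", "-p", p, "--kubernetes-version=v1.16.0").Run()
        var ee *exec.ExitError
        if !errors.As(err, &ee) || ee.ExitCode() != 106 {
            log.Fatalf("expected exit 106 on downgrade, got %v", err)
        }
        fmt.Println("downgrade correctly refused")
    }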

                                                
                                    
TestMissingContainerUpgrade (173.7s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.2184922762.exe start -p missing-upgrade-841542 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.2184922762.exe start -p missing-upgrade-841542 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.395278306s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-841542
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-841542
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-841542 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-841542 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m19.528147173s)
helpers_test.go:175: Cleaning up "missing-upgrade-841542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-841542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-841542: (2.600217033s)
--- PASS: TestMissingContainerUpgrade (173.70s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (106.509373ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-128613] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-128613 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-128613 --driver=docker  --container-runtime=containerd: (38.804034562s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-128613 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.45s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.636426287s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-128613 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-128613 status -o json: exit status 2 (358.089521ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-128613","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-128613
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-128613: (1.953769081s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.95s)

                                                
                                    
TestNoKubernetes/serial/Start (6.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-128613 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.122883453s)
--- PASS: TestNoKubernetes/serial/Start (6.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-128613 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-128613 "sudo systemctl is-active --quiet service kubelet": exit status 1 (411.555196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
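
Note: the expected non-zero exit above comes from running systemctl inside the guest: with Kubernetes disabled, `systemctl is-active service kubelet` fails, which is the passing condition. A minimal sketch, with a hypothetical profile name:

    // kubelet_inactive.go - sketch of the "kubelet must not run" check above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // "nok8s" is a hypothetical profile started with --no-kubernetes.
        err := exec.Command("minikube", "ssh", "-p", "nok8s",
            "sudo systemctl is-active --quiet service kubelet").Run()
        if err == nil {
            log.Fatal("kubelet is active but Kubernetes was disabled")
        }
        fmt.Println("kubelet not running, as expected")
    }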

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-128613
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-128613: (1.328299285s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-128613 --driver=docker  --container-runtime=containerd
E1025 22:11:35.208563  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-128613 --driver=docker  --container-runtime=containerd: (7.542232908s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-128613 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-128613 "sudo systemctl is-active --quiet service kubelet": exit status 1 (538.269814ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (111.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.3050955726.exe start -p stopped-upgrade-280374 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1025 22:13:34.259039  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.3050955726.exe start -p stopped-upgrade-280374 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.207095491s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.3050955726.exe -p stopped-upgrade-280374 stop
E1025 22:14:16.679111  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.3050955726.exe -p stopped-upgrade-280374 stop: (20.172855217s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-280374 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1025 22:15:12.165672  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-280374 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.364684143s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-280374
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-280374: (1.157267242s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestPause/serial/Start (60.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-550982 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-550982 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m0.707030678s)
--- PASS: TestPause/serial/Start (60.71s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-550982 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-550982 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.874047534s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.90s)

                                                
                                    
TestPause/serial/Pause (1.34s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-550982 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-550982 --alsologtostderr -v=5: (1.343895514s)
--- PASS: TestPause/serial/Pause (1.34s)

                                                
                                    
TestPause/serial/VerifyStatus (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-550982 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-550982 --output=json --layout=cluster: exit status 2 (585.156389ms)

                                                
                                                
-- stdout --
	{"Name":"pause-550982","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-550982","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.59s)
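
Note: `minikube status --output=json --layout=cluster` reports HTTP-style status codes (418 Paused, 405 Stopped, 200 OK, as in the stdout above) and exits non-zero while the cluster is paused. A minimal sketch of decoding just the top-level fields, with a hypothetical profile name:

    // cluster_status.go - sketch of parsing the layout=cluster status above.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // clusterState mirrors only the top-level fields this check needs.
    type clusterState struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    func main() {
        // Exit status 2 is expected while paused, so ignore the error
        // and parse whatever stdout was produced. Profile is hypothetical.
        out, _ := exec.Command("minikube", "status", "-p", "my-profile",
            "--output=json", "--layout=cluster").Output()
        var st clusterState
        if err := json.Unmarshal(out, &st); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 (Paused)
    }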

                                                
                                    
TestPause/serial/Unpause (1.24s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-550982 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-550982 --alsologtostderr -v=5: (1.243462102s)
--- PASS: TestPause/serial/Unpause (1.24s)

                                                
                                    
TestPause/serial/PauseAgain (1.26s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-550982 --alsologtostderr -v=5
E1025 22:17:53.633387  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-550982 --alsologtostderr -v=5: (1.26207575s)
--- PASS: TestPause/serial/PauseAgain (1.26s)

TestPause/serial/DeletePaused (3.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-550982 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-550982 --alsologtostderr -v=5: (3.113396534s)
--- PASS: TestPause/serial/DeletePaused (3.11s)

TestPause/serial/VerifyDeletedResources (0.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-550982
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-550982: exit status 1 (20.047825ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-550982: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.73s)
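The deletion check above leans on docker volume inspect exiting non-zero ("no such volume") once the profile is gone. A rough Go equivalent of that check, with the profile name hard-coded from this log:

package main

import (
	"fmt"
	"os/exec"
)

// volumeDeleted reports whether `docker volume inspect <name>` fails,
// which is how the test above concludes the volume was removed.
func volumeDeleted(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // non-zero exit => "no such volume"
}

func main() {
	if volumeDeleted("pause-550982") {
		fmt.Println("volume gone, as expected after delete")
	} else {
		fmt.Println("volume still present")
	}
}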

TestNetworkPlugins/group/false (5.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-023705 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-023705 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (242.469793ms)
-- stdout --
	* [false-023705] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1025 22:18:07.527910  550023 out.go:296] Setting OutFile to fd 1 ...
	I1025 22:18:07.528152  550023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:18:07.528182  550023 out.go:309] Setting ErrFile to fd 2...
	I1025 22:18:07.528203  550023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:18:07.528474  550023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-401064/.minikube/bin
	I1025 22:18:07.528895  550023 out.go:303] Setting JSON to false
	I1025 22:18:07.530105  550023 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7225,"bootTime":1698265063,"procs":387,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 22:18:07.530205  550023 start.go:138] virtualization:  
	I1025 22:18:07.533695  550023 out.go:177] * [false-023705] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1025 22:18:07.536479  550023 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 22:18:07.538365  550023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:18:07.536564  550023 notify.go:220] Checking for updates...
	I1025 22:18:07.540240  550023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-401064/kubeconfig
	I1025 22:18:07.541867  550023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-401064/.minikube
	I1025 22:18:07.543238  550023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 22:18:07.545138  550023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:18:07.547298  550023 config.go:182] Loaded profile config "force-systemd-flag-911794": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1025 22:18:07.547399  550023 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 22:18:07.580427  550023 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1025 22:18:07.580544  550023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 22:18:07.672580  550023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-25 22:18:07.661884048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1025 22:18:07.672686  550023 docker.go:295] overlay module found
	I1025 22:18:07.674733  550023 out.go:177] * Using the docker driver based on user configuration
	I1025 22:18:07.676507  550023 start.go:298] selected driver: docker
	I1025 22:18:07.676531  550023 start.go:902] validating driver "docker" against <nil>
	I1025 22:18:07.676544  550023 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:18:07.678953  550023 out.go:177] 
	W1025 22:18:07.680729  550023 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1025 22:18:07.682247  550023 out.go:177] 
** /stderr **
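The exit status 14 above is an up-front usage rejection, not a cluster failure: --cni=false cannot be combined with the containerd runtime, which needs a CNI plugin to run pods. The following Go toy sketches a guard in the same spirit; it is illustrative only and is not minikube's actual validation code.

package main

import "fmt"

// validateCNI mimics the MK_USAGE check seen above: non-docker runtimes
// such as containerd cannot schedule pods without a CNI plugin, so
// --cni=false is rejected before any cluster work starts.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}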
net_test.go:88: 
----------------------- debugLogs start: false-023705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-023705

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-023705

>>> host: /etc/nsswitch.conf:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/hosts:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/resolv.conf:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-023705

>>> host: crictl pods:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: crictl containers:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> k8s: describe netcat deployment:
error: context "false-023705" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-023705" does not exist

>>> k8s: netcat logs:
error: context "false-023705" does not exist

>>> k8s: describe coredns deployment:
error: context "false-023705" does not exist

>>> k8s: describe coredns pods:
error: context "false-023705" does not exist

>>> k8s: coredns logs:
error: context "false-023705" does not exist

>>> k8s: describe api server pod(s):
error: context "false-023705" does not exist

>>> k8s: api server logs:
error: context "false-023705" does not exist

>>> host: /etc/cni:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: ip a s:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: ip r s:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: iptables-save:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: iptables table nat:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> k8s: describe kube-proxy daemon set:
error: context "false-023705" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-023705" does not exist

>>> k8s: kube-proxy logs:
error: context "false-023705" does not exist

>>> host: kubelet daemon status:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: kubelet daemon config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> k8s: kubelet logs:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-023705

>>> host: docker daemon status:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: docker daemon config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/docker/daemon.json:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: docker system info:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: cri-docker daemon status:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: cri-docker daemon config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: cri-dockerd version:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: containerd daemon status:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: containerd daemon config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/containerd/config.toml:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: containerd config dump:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: crio daemon status:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: crio daemon config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: /etc/crio:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

>>> host: crio config:
* Profile "false-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023705"

----------------------- debugLogs end: false-023705 [took: 4.764727879s] --------------------------------
helpers_test.go:175: Cleaning up "false-023705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-023705
--- PASS: TestNetworkPlugins/group/false (5.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (133.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-168378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1025 22:20:12.165492  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 22:21:37.303224  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-168378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m13.948890953s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.95s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-168378 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb7fbe85-3542-413d-9f8f-2390c013b445] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb7fbe85-3542-413d-9f8f-2390c013b445] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.037098198s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-168378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
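The deploy step above applies testdata/busybox.yaml and then polls up to 8m0s for pods labelled integration-test=busybox. An equivalent one-shot check can delegate the polling to kubectl wait; the Go sketch below uses that mechanism instead of the test helpers' own poll loop, with the context name taken from this log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl blocks until the pod is Ready or the timeout elapses,
	// mirroring the "healthy within ..." assertion above.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-168378",
		"wait", "--for=condition=ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("busybox never became ready:", err)
	}
}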

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-168378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-168378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-168378 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-168378 --alsologtostderr -v=3: (12.264195482s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-168378 -n old-k8s-version-168378
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-168378 -n old-k8s-version-168378: exit status 7 (100.947248ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-168378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
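minikube status deliberately exits non-zero for a stopped host, which is why the test tolerates exit status 7 here ("may be ok"). Below is a small Go sketch of the same tolerance; the value 7 is taken from this log rather than from a documented contract, so treat it as an assumption.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: {{.Host}} prints "Stopped" for a halted node.
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-168378").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Printf("host %s (exit 7, acceptable while stopped)\n", out)
	} else if err != nil {
		fmt.Println("unexpected status error:", err)
	} else {
		fmt.Printf("host %s\n", out)
	}
}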

TestStartStop/group/old-k8s-version/serial/SecondStart (658.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-168378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-168378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m58.38651026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-168378 -n old-k8s-version-168378
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (658.81s)

TestStartStop/group/no-preload/serial/FirstStart (71.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-033975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:22:53.634317  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 22:23:34.259002  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-033975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m11.574560299s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.57s)

TestStartStop/group/no-preload/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-033975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a22730a8-88b3-4adf-8556-ac118fdb3d87] Pending
helpers_test.go:344: "busybox" [a22730a8-88b3-4adf-8556-ac118fdb3d87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a22730a8-88b3-4adf-8556-ac118fdb3d87] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.033303119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-033975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-033975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-033975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.104494314s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-033975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-033975 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-033975 --alsologtostderr -v=3: (12.12153332s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-033975 -n no-preload-033975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-033975 -n no-preload-033975: exit status 7 (94.802026ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-033975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (339.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-033975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:25:12.166274  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 22:27:53.638937  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
E1025 22:28:15.209638  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 22:28:34.259746  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-033975 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m38.602691783s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-033975 -n no-preload-033975
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.02s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hw769" [dc5b0901-68e9-497d-be55-8b5f49152b6f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hw769" [dc5b0901-68e9-497d-be55-8b5f49152b6f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.026072348s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hw769" [dc5b0901-68e9-497d-be55-8b5f49152b6f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011510657s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-033975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-033975 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
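VerifyKubernetesImages works by parsing `sudo crictl images -o json` from inside the node and flagging anything outside the expected minikube image set. The listing half of that could look like the Go sketch below; the JSON field names are assumed from crictl's output format, not confirmed by this log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImages matches the slice of `crictl images -o json` output used
// below; the field names are an assumption about crictl's JSON schema.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "no-preload-033975", "sudo crictl images -o json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// The test then filters out the known minikube/k8s images,
			// leaving "non-minikube" entries like kindest/kindnetd above.
			fmt.Println(tag)
		}
	}
}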

TestStartStop/group/no-preload/serial/Pause (3.47s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-033975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-033975 -n no-preload-033975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-033975 -n no-preload-033975: exit status 2 (371.484202ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-033975 -n no-preload-033975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-033975 -n no-preload-033975: exit status 2 (376.517168ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-033975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-033975 -n no-preload-033975
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-033975 -n no-preload-033975
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.47s)
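The Pause subtest above is a round-trip: pause the cluster, confirm the apiserver reports Paused while the kubelet reports Stopped, then unpause. A compressed Go sketch of that sequence, with the profile name from this log and most error handling elided:

package main

import (
	"fmt"
	"os/exec"
)

// field runs `minikube status --format=<tmpl>` and returns its output;
// a non-zero exit is expected while the component is paused or stopped.
func field(profile, tmpl string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format="+tmpl, "-p", profile).Output()
	return string(out)
}

func main() {
	const p = "no-preload-033975"
	exec.Command("out/minikube-linux-arm64", "pause", "-p", p).Run()
	fmt.Println("apiserver:", field(p, "{{.APIServer}}")) // expect Paused
	fmt.Println("kubelet:  ", field(p, "{{.Kubelet}}"))   // expect Stopped
	exec.Command("out/minikube-linux-arm64", "unpause", "-p", p).Run()
}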

TestStartStop/group/embed-certs/serial/FirstStart (87.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-045410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:30:56.679571  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-045410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m27.149354198s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.15s)

TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-045410 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55bcd48d-42a8-44cd-a07f-a0248ce7693d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [55bcd48d-42a8-44cd-a07f-a0248ce7693d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.028480972s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-045410 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-045410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-045410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128905109s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-045410 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (12.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-045410 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-045410 --alsologtostderr -v=3: (12.164429584s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-045410 -n embed-certs-045410
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-045410 -n embed-certs-045410: exit status 7 (95.990547ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-045410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (341.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-045410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:32:53.634111  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-045410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m41.068453802s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-045410 -n embed-certs-045410
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nxjnf" [38c5aad7-ee34-4fb8-9340-d041989d21b6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025933996s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nxjnf" [38c5aad7-ee34-4fb8-9340-d041989d21b6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009135248s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-168378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-168378 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-168378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-168378 -n old-k8s-version-168378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-168378 -n old-k8s-version-168378: exit status 2 (391.771215ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-168378 -n old-k8s-version-168378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-168378 -n old-k8s-version-168378: exit status 2 (372.669303ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-168378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-168378 -n old-k8s-version-168378
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-168378 -n old-k8s-version-168378
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.52s)
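
The pause cycle above can be replayed by hand; a minimal sketch using the commands recorded in this entry (minikube stands in for the out/minikube-linux-arm64 build under test, and the profile name is from this run):

	minikube pause -p old-k8s-version-168378 --alsologtostderr -v=1
	# while paused, the API server reports "Paused" and status exits 2, which the test tolerates
	minikube status --format={{.APIServer}} -p old-k8s-version-168378 -n old-k8s-version-168378
	# the kubelet reports "Stopped" while paused
	minikube status --format={{.Kubelet}} -p old-k8s-version-168378 -n old-k8s-version-168378
	minikube unpause -p old-k8s-version-168378 --alsologtostderr -v=1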

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-383214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:33:34.259621  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 22:33:46.418457  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.423706  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.433973  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.454185  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.494551  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.574735  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:46.735809  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:47.056461  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:47.696707  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:48.977305  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:51.538005  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:33:56.658215  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:34:06.898842  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:34:27.379059  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-383214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (58.918295764s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.92s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-383214 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [473886fa-b528-4330-8c64-fef41d19d30f] Pending
helpers_test.go:344: "busybox" [473886fa-b528-4330-8c64-fef41d19d30f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [473886fa-b528-4330-8c64-fef41d19d30f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.026876283s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-383214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)
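
The DeployApp step reduces to creating the busybox pod and probing its file-descriptor limit; a sketch assuming kubectl wait is an acceptable stand-in for the harness's own 8m poll loop (the label comes from the log above):

	kubectl --context default-k8s-diff-port-383214 create -f testdata/busybox.yaml
	# hypothetical stand-in for the harness's pod readiness wait
	kubectl --context default-k8s-diff-port-383214 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context default-k8s-diff-port-383214 exec busybox -- /bin/sh -c "ulimit -n"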

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-383214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-383214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113109278s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-383214 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-383214 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-383214 --alsologtostderr -v=3: (12.387569628s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214: exit status 7 (98.016222ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-383214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
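
A sketch of the same check outside the harness, using the commands recorded above (exit status 7 from status simply means the host is stopped, which the test treats as acceptable):

	minikube status --format={{.Host}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
	# expect "Stopped" and exit status 7
	minikube addons enable dashboard -p default-k8s-diff-port-383214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4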

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-383214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:35:08.339632  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:35:12.165430  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
E1025 22:36:30.260262  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
E1025 22:36:55.271613  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.276871  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.287083  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.307376  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.347683  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.428337  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.588651  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:55.909250  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:56.550218  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:36:57.830940  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:37:00.391130  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:37:05.512113  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:37:15.753084  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:37:36.233622  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-383214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m43.600618326s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xdz6" [74fccd90-7a78-4968-b352-f90ba9257bdd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1025 22:37:53.633929  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xdz6" [74fccd90-7a78-4968-b352-f90ba9257bdd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.026368467s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8xdz6" [74fccd90-7a78-4968-b352-f90ba9257bdd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015502739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-045410 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-045410 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-045410 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-045410 -n embed-certs-045410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-045410 -n embed-certs-045410: exit status 2 (367.757347ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-045410 -n embed-certs-045410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-045410 -n embed-certs-045410: exit status 2 (378.033756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-045410 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-045410 -n embed-certs-045410
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-045410 -n embed-certs-045410
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.58s)

TestStartStop/group/newest-cni/serial/FirstStart (42.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-444827 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:38:17.193898  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:38:17.304208  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 22:38:34.258994  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 22:38:46.418102  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-444827 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (42.045889866s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-444827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-444827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.374088468s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-444827 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-444827 --alsologtostderr -v=3: (1.296426361s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-444827 -n newest-cni-444827
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-444827 -n newest-cni-444827: exit status 7 (94.4842ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-444827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (30.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-444827 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1025 22:39:14.100682  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-444827 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (30.303983723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-444827 -n newest-cni-444827
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-444827 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
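
The image audit is a single command over SSH; a sketch using the invocation recorded above (the harness then scans the returned JSON for repositories outside minikube's expected image set, which is how the kindest/kindnetd entry above gets reported):

	minikube ssh -p newest-cni-444827 "sudo crictl images -o json"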

TestStartStop/group/newest-cni/serial/Pause (3.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-444827 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-444827 -n newest-cni-444827
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-444827 -n newest-cni-444827: exit status 2 (375.726401ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-444827 -n newest-cni-444827
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-444827 -n newest-cni-444827: exit status 2 (396.233029ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-444827 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-444827 -n newest-cni-444827
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-444827 -n newest-cni-444827
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.44s)

TestNetworkPlugins/group/auto/Start (85.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1025 22:39:39.114779  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:40:12.165906  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m25.756238981s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.76s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7snbg" [6dc756ed-6560-4759-bf52-144c27aa29a5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7snbg" [6dc756ed-6560-4759-bf52-144c27aa29a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.027278287s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7snbg" [6dc756ed-6560-4759-bf52-144c27aa29a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010834703s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-383214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-383214 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-383214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214: exit status 2 (433.319259ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214: exit status 2 (395.814935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-383214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-383214 -n default-k8s-diff-port-383214
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)
E1025 22:46:41.363385  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:55.272396  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory

TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

TestNetworkPlugins/group/auto/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hrbtb" [174cd4aa-5601-4037-bd06-f861bd645b91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hrbtb" [174cd4aa-5601-4037-bd06-f861bd645b91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.012127542s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.49s)

TestNetworkPlugins/group/kindnet/Start (86.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m26.127462807s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.13s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.26s)
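
The DNS, Localhost, and HairPin checks for the auto plugin reduce to three execs against the netcat deployment; a sketch using the commands recorded above:

	kubectl --context auto-023705 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"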

TestNetworkPlugins/group/calico/Start (66.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1025 22:41:55.272515  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
E1025 22:42:22.955008  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/old-k8s-version-168378/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m6.320074346s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b9ztz" [29a2e5d2-fc9f-42ef-8727-9d76c61d831d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.026775264s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.53s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nwlfr" [e3589c1c-2fb5-4bf0-a590-0e91ca869fda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nwlfr" [e3589c1c-2fb5-4bf0-a590-0e91ca869fda] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.012381166s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.53s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vk6pr" [b58e88ad-1ef9-4346-aa99-7519ae63e75e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045136291s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gcg4x" [e6e308c5-f09a-4ef1-9ed0-8b4f8c567d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 22:42:53.633677  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/ingress-addon-legacy-356915/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gcg4x" [e6e308c5-f09a-4ef1-9ed0-8b4f8c567d87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.024543232s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.51s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/custom-flannel/Start (63.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.780164429s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.78s)

TestNetworkPlugins/group/enable-default-cni/Start (91.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1025 22:43:34.258975  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/addons-624750/client.crt: no such file or directory
E1025 22:43:46.418743  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/no-preload-033975/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m31.038168615s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.04s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rvsgk" [d34ccbfb-3188-4bc4-8f14-7e0bf8e72d84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rvsgk" [d34ccbfb-3188-4bc4-8f14-7e0bf8e72d84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.010577266s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (59.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1025 22:44:53.643707  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/default-k8s-diff-port-383214/client.crt: no such file or directory
E1025 22:44:55.210836  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.59822035s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.60s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wg77f" [cc11c3e8-7540-4001-b95f-ec728ef8215b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wg77f" [cc11c3e8-7540-4001-b95f-ec728ef8215b] Running
E1025 22:45:12.165740  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/functional-934322/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012234485s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.48s)
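
Note: NetCatPod installs the test workload with kubectl replace --force and then polls until a pod labelled app=netcat is Running; the Pending lines above are the normal readiness progression, not errors. An equivalent hand check, assuming the deployment is still present:

    kubectl --context enable-default-cni-023705 get pods -l app=netcat -o wide
    kubectl --context enable-default-cni-023705 wait --for=condition=ready pod -l app=netcat --timeout=60s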

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
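
Note: this subtest passes as long as kubernetes.default resolves through the cluster DNS from inside the pod. On a default minikube service CIDR the nslookup output looks roughly like the following (addresses are cluster-specific):

    Server:     10.96.0.10
    Name:       kubernetes.default.svc.cluster.local
    Address:    10.96.0.1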

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1025 22:45:14.124791  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/default-k8s-diff-port-383214/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)
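
Note: in the nc invocation, -z scans without sending data, -w 5 caps the connect timeout at five seconds, and -i 5 sets the probe interval, so the subtest succeeds only if something is actually listening on the target port:

    nc -w 5 -i 5 -z localhost 8080    # exit status 0 only when localhost:8080 accepts a connection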

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

TestNetworkPlugins/group/bridge/Start (86.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-023705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m26.783557361s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.78s)
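
Note: unlike flannel, --cni=bridge configures the built-in bridge plugin on the node directly rather than deploying a CNI DaemonSet. Assuming the profile is still running, a quick way to see which CNI configuration landed:

    out/minikube-linux-arm64 ssh -p bridge-023705 "ls /etc/cni/net.d/"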

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rq4mb" [42093438-fe5c-4be8-be04-1d1ecf92c438] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.05898924s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.06s)
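
Note: ControllerPod only waits for the flannel DaemonSet pod in the kube-flannel namespace to be Running. The same check by hand:

    kubectl --context flannel-023705 -n kube-flannel get pods -l app=flannel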

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)
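
Note: KubeletFlags greps the live kubelet command line over SSH; with the containerd runtime the expectation is essentially that kubelet was pointed at the containerd socket (a --container-runtime-endpoint=unix:///run/containerd/containerd.sock style flag) rather than at dockerd:

    out/minikube-linux-arm64 ssh -p flannel-023705 "pgrep -a kubelet"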

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.5s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-75tfd" [3c20e751-6284-4a8f-9500-28d154d00457] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 22:45:55.085192  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/default-k8s-diff-port-383214/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-75tfd" [3c20e751-6284-4a8f-9500-28d154d00457] Running
E1025 22:46:00.402069  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.407512  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.417759  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.437999  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.478970  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.559198  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:00.719311  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:01.039460  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
E1025 22:46:01.680057  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.027450211s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.50s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1025 22:46:02.960954  406453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-401064/.minikube/profiles/auto-023705/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

TestNetworkPlugins/group/flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-023705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (8.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-023705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ksbq7" [d14dfe67-68d9-4890-b941-b3ec2aa8ad29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ksbq7" [d14dfe67-68d9-4890-b941-b3ec2aa8ad29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.012396939s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.32s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-023705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-023705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (28/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-305055 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-305055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-305055
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-885535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-885535
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (6.24s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-023705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-023705

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-023705

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/hosts:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/resolv.conf:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-023705

>>> host: crictl pods:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: crictl containers:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> k8s: describe netcat deployment:
error: context "kubenet-023705" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-023705" does not exist

>>> k8s: netcat logs:
error: context "kubenet-023705" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-023705" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-023705" does not exist

>>> k8s: coredns logs:
error: context "kubenet-023705" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-023705" does not exist

>>> k8s: api server logs:
error: context "kubenet-023705" does not exist

>>> host: /etc/cni:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: ip a s:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: ip r s:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: iptables-save:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: iptables table nat:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-023705" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-023705" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-023705" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: kubelet daemon config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> k8s: kubelet logs:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-023705

>>> host: docker daemon status:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: docker daemon config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: docker system info:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: cri-docker daemon status:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: cri-docker daemon config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: cri-dockerd version:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: containerd daemon status:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: containerd daemon config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: containerd config dump:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: crio daemon status:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: crio daemon config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: /etc/crio:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

>>> host: crio config:
* Profile "kubenet-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023705"

----------------------- debugLogs end: kubenet-023705 [took: 6.000328485s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-023705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-023705
--- SKIP: TestNetworkPlugins/group/kubenet (6.24s)
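
Note: kubenet is not a CNI plugin, and minikube only exercises the containerd runtime with CNI networking, hence the skip. The debugLogs dump above probes a kubenet-023705 profile that was never started, which is why every kubectl call reports a missing context and every minikube call a missing profile; the test is still counted as skipped, not failed.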

                                                
                                    
TestNetworkPlugins/group/cilium (5.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-023705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-023705

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-023705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-023705" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-023705" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-023705" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-023705" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: kubelet daemon config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> k8s: kubelet logs:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-023705

>>> host: docker daemon status:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: docker daemon config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: docker system info:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: cri-docker daemon status:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: cri-docker daemon config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: cri-dockerd version:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: containerd daemon status:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: containerd daemon config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: containerd config dump:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: crio daemon status:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: crio daemon config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: /etc/crio:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

>>> host: crio config:
* Profile "cilium-023705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023705"

----------------------- debugLogs end: cilium-023705 [took: 5.331813491s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-023705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-023705
--- SKIP: TestNetworkPlugins/group/cilium (5.56s)