Test Report: Docker_Linux_docker_arm64 17848

4e03e3f64731b9a82b3398fd73787c019520d693:2023-12-21:32379

Failed tests (4/331)

|-------|-----------------------------------------------------|--------------|
| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         | 37.79        |
| 175   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 51.18        |
| 261   | TestStoppedBinaryUpgrade/Upgrade                    | 414.26       |
| 290   | TestStoppedBinaryUpgrade/MinikubeLogs               | 0.18         |
|-------|-----------------------------------------------------|--------------|
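To chase one of these locally, a failing test can usually be re-run in isolation. The sketch below is a hypothetical invocation, assuming the standard minikube integration-test layout (test/integration) and a prebuilt out/minikube-linux-arm64; the -minikube-start-args value mirrors the start arguments visible in the log below, but the exact flag set lives in test/integration/main_test.go and should be checked there:

	# hypothetical local re-run of the first failure (flag names are assumptions)
	go test ./test/integration -v -timeout 30m \
	  -run "TestAddons/parallel/Ingress" \
	  -args -minikube-start-args="--driver=docker --container-runtime=docker"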
TestAddons/parallel/Ingress (37.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-203484 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-203484 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-203484 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [070e9a1b-fbbc-4628-bd4a-dc6f3d16be87] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [070e9a1b-fbbc-4628-bd4a-dc6f3d16be87] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003183011s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-203484 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.064282372s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 addons disable ingress-dns --alsologtostderr -v=1: (1.237293178s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 addons disable ingress --alsologtostderr -v=1: (7.797285619s)
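The root cause above is the nslookup against the ingress-dns addon timing out at 192.168.49.2 (the IP reported by "minikube ip"). When triaging, the DNS endpoint can be probed directly from the host; a minimal sketch, reissuing the same query the test ran and adding dig, which exits non-zero and names the unreachable server when nothing answers on 53/udp:

	# exactly the query the test ran
	nslookup hello-john.test 192.168.49.2
	# dig with a short timeout makes "no answer at all" explicit
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test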
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-203484
helpers_test.go:235: (dbg) docker inspect addons-203484:

-- stdout --
	[
	    {
	        "Id": "20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8",
	        "Created": "2023-12-21T18:04:06.165631288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:04:06.546363562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8/hosts",
	        "LogPath": "/var/lib/docker/containers/20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8/20fae3d35c3c4312c3a2f2f0c1f07d80765f081842bf82a197c85167c3e0f4a8-json.log",
	        "Name": "/addons-203484",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-203484:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-203484",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/099b390722e27a0afbb1d3f4fcd924022a227954a967740e0632bb12a280da02-init/diff:/var/lib/docker/overlay2/608babf4968b91d3754a5a1770f6af5ff35007ee68accb0cb2a42746e0ee2f7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/099b390722e27a0afbb1d3f4fcd924022a227954a967740e0632bb12a280da02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/099b390722e27a0afbb1d3f4fcd924022a227954a967740e0632bb12a280da02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/099b390722e27a0afbb1d3f4fcd924022a227954a967740e0632bb12a280da02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-203484",
	                "Source": "/var/lib/docker/volumes/addons-203484/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-203484",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-203484",
	                "name.minikube.sigs.k8s.io": "addons-203484",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "42b1c0d3f93775b7f0d8ab646defa8225761f3271be29b4ae2ab70dbd5c2d323",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/42b1c0d3f937",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-203484": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20fae3d35c3c",
	                        "addons-203484"
	                    ],
	                    "NetworkID": "dd83248826ef5d0007419ee755fdd9e2b5ca41442358fe1d6a7d86f652ec98b6",
	                    "EndpointID": "c72fc117006770f27ea77e39659f1bb999b1afe3ce12fd30a05df13932782caa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
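Most of the inspect blob above is incidental to this failure; the fields the harness actually keys on are the published host ports and the container's address on the per-profile network. A sketch of extracting just those with Go templates; the "22/tcp" index expression is the same template the provisioning log further down uses verbatim:

	# host port mapped to the container's SSH port
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-203484
	# static IP on the addons-203484 network (hyphenated keys need index, not dot syntax)
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-203484").IPAddress}}' addons-203484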
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-203484 -n addons-203484
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 logs -n 25: (1.646624475s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-125953   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | -p download-only-125953              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-125953   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | -p download-only-125953              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-125953   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | -p download-only-125953              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:03 UTC |
	| delete  | -p download-only-125953              | download-only-125953   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:03 UTC |
	| delete  | -p download-only-125953              | download-only-125953   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:03 UTC |
	| start   | --download-only -p                   | download-docker-048821 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | download-docker-048821               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-048821            | download-docker-048821 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:03 UTC |
	| start   | --download-only -p                   | binary-mirror-911256   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | binary-mirror-911256                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42837               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-911256              | binary-mirror-911256   | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:03 UTC |
	| addons  | disable dashboard -p                 | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | addons-203484                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |                     |
	|         | addons-203484                        |                        |         |         |                     |                     |
	| start   | -p addons-203484 --wait=true         | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC | 21 Dec 23 18:06 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         |  --container-runtime=docker          |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-203484 ip                     | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	| addons  | addons-203484 addons disable         | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-203484 addons                 | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	|         | addons-203484                        |                        |         |         |                     |                     |
	| ssh     | addons-203484 ssh curl -s            | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-203484 ip                     | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:06 UTC | 21 Dec 23 18:06 UTC |
	| addons  | addons-203484 addons disable         | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-203484 addons disable         | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-203484 addons                 | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-203484 addons                 | addons-203484          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC |                     |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:43.113540    8241 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:43.113706    8241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:43.113731    8241 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:43.113749    8241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:43.113996    8241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:03:43.114434    8241 out.go:303] Setting JSON to false
	I1221 18:03:43.115162    8241 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2770,"bootTime":1703179053,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:03:43.115251    8241 start.go:138] virtualization:  
	I1221 18:03:43.119134    8241 out.go:177] * [addons-203484] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:03:43.121049    8241 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:03:43.122647    8241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:43.121131    8241 notify.go:220] Checking for updates...
	I1221 18:03:43.124388    8241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:03:43.126483    8241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:03:43.128315    8241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1221 18:03:43.129914    8241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:03:43.131811    8241 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:03:43.155608    8241 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:43.155706    8241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:43.232347    8241 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:43.223021875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:43.232455    8241 docker.go:295] overlay module found
	I1221 18:03:43.234550    8241 out.go:177] * Using the docker driver based on user configuration
	I1221 18:03:43.236082    8241 start.go:298] selected driver: docker
	I1221 18:03:43.236097    8241 start.go:902] validating driver "docker" against <nil>
	I1221 18:03:43.236109    8241 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:03:43.236684    8241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:43.309834    8241 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:43.301087115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:43.309986    8241 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:03:43.310225    8241 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 18:03:43.311991    8241 out.go:177] * Using Docker driver with root privileges
	I1221 18:03:43.313920    8241 cni.go:84] Creating CNI manager for ""
	I1221 18:03:43.313948    8241 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:03:43.313961    8241 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1221 18:03:43.313980    8241 start_flags.go:323] config:
	{Name:addons-203484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-203484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:43.316224    8241 out.go:177] * Starting control plane node addons-203484 in cluster addons-203484
	I1221 18:03:43.318028    8241 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:03:43.319977    8241 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:43.321731    8241 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:03:43.321798    8241 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:43.321812    8241 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1221 18:03:43.321823    8241 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:43.321910    8241 preload.go:174] Found /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1221 18:03:43.321919    8241 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1221 18:03:43.322256    8241 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/config.json ...
	I1221 18:03:43.322285    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/config.json: {Name:mkf54278eabb29299fc0103e7065b87d02ec578f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:03:43.338660    8241 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:43.338785    8241 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:43.338802    8241 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:03:43.338806    8241 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:03:43.338813    8241 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:03:43.338818    8241 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1221 18:03:59.013523    8241 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1221 18:03:59.013565    8241 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:03:59.013611    8241 start.go:365] acquiring machines lock for addons-203484: {Name:mkf26840b178a6837bc331aff6f03b8d52bc011c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:03:59.013719    8241 start.go:369] acquired machines lock for "addons-203484" in 87.467µs
	I1221 18:03:59.013749    8241 start.go:93] Provisioning new machine with config: &{Name:addons-203484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-203484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1221 18:03:59.013834    8241 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:03:59.016184    8241 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1221 18:03:59.016418    8241 start.go:159] libmachine.API.Create for "addons-203484" (driver="docker")
	I1221 18:03:59.016451    8241 client.go:168] LocalClient.Create starting
	I1221 18:03:59.016547    8241 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem
	I1221 18:03:59.476074    8241 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem
	I1221 18:03:59.654096    8241 cli_runner.go:164] Run: docker network inspect addons-203484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:03:59.673494    8241 cli_runner.go:211] docker network inspect addons-203484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:03:59.673583    8241 network_create.go:281] running [docker network inspect addons-203484] to gather additional debugging logs...
	I1221 18:03:59.673602    8241 cli_runner.go:164] Run: docker network inspect addons-203484
	W1221 18:03:59.690194    8241 cli_runner.go:211] docker network inspect addons-203484 returned with exit code 1
	I1221 18:03:59.690239    8241 network_create.go:284] error running [docker network inspect addons-203484]: docker network inspect addons-203484: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-203484 not found
	I1221 18:03:59.690252    8241 network_create.go:286] output of [docker network inspect addons-203484]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-203484 not found
	
	** /stderr **
	I1221 18:03:59.690358    8241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:03:59.707669    8241 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40000d1430}
	I1221 18:03:59.707708    8241 network_create.go:124] attempt to create docker network addons-203484 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 18:03:59.707762    8241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-203484 addons-203484
	I1221 18:03:59.772063    8241 network_create.go:108] docker network addons-203484 192.168.49.0/24 created
	I1221 18:03:59.772096    8241 kic.go:121] calculated static IP "192.168.49.2" for the "addons-203484" container
	I1221 18:03:59.772177    8241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:03:59.788278    8241 cli_runner.go:164] Run: docker volume create addons-203484 --label name.minikube.sigs.k8s.io=addons-203484 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:03:59.806124    8241 oci.go:103] Successfully created a docker volume addons-203484
	I1221 18:03:59.806210    8241 cli_runner.go:164] Run: docker run --rm --name addons-203484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-203484 --entrypoint /usr/bin/test -v addons-203484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:04:02.093062    8241 cli_runner.go:217] Completed: docker run --rm --name addons-203484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-203484 --entrypoint /usr/bin/test -v addons-203484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (2.28680171s)
	I1221 18:04:02.093090    8241 oci.go:107] Successfully prepared a docker volume addons-203484
	I1221 18:04:02.093116    8241 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:04:02.093134    8241 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:04:02.093219    8241 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-203484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:04:06.080111    8241 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-203484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.986830325s)
	I1221 18:04:06.080142    8241 kic.go:203] duration metric: took 3.987006 seconds to extract preloaded images to volume
	W1221 18:04:06.080279    8241 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:04:06.080397    8241 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:04:06.149559    8241 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-203484 --name addons-203484 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-203484 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-203484 --network addons-203484 --ip 192.168.49.2 --volume addons-203484:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:04:06.556652    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Running}}
	I1221 18:04:06.586688    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:06.612703    8241 cli_runner.go:164] Run: docker exec addons-203484 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:04:06.678664    8241 oci.go:144] the created container "addons-203484" has a running status.
	I1221 18:04:06.678693    8241 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa...
	I1221 18:04:08.249603    8241 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:04:08.270663    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:08.289442    8241 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:04:08.289467    8241 kic_runner.go:114] Args: [docker exec --privileged addons-203484 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 18:04:08.345761    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:08.365575    8241 machine.go:88] provisioning docker machine ...
	I1221 18:04:08.365605    8241 ubuntu.go:169] provisioning hostname "addons-203484"
	I1221 18:04:08.365670    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:08.385074    8241 main.go:141] libmachine: Using SSH client type: native
	I1221 18:04:08.385500    8241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:04:08.385518    8241 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-203484 && echo "addons-203484" | sudo tee /etc/hostname
	I1221 18:04:08.546304    8241 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-203484
	
	I1221 18:04:08.546433    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:08.566845    8241 main.go:141] libmachine: Using SSH client type: native
	I1221 18:04:08.567256    8241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:04:08.567279    8241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-203484' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-203484/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-203484' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:04:08.720488    8241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:04:08.720513    8241 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-2360/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-2360/.minikube}
	I1221 18:04:08.720546    8241 ubuntu.go:177] setting up certificates
	I1221 18:04:08.720558    8241 provision.go:83] configureAuth start
	I1221 18:04:08.720630    8241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-203484
	I1221 18:04:08.739792    8241 provision.go:138] copyHostCerts
	I1221 18:04:08.739882    8241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem (1082 bytes)
	I1221 18:04:08.740030    8241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem (1123 bytes)
	I1221 18:04:08.740101    8241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem (1675 bytes)
	I1221 18:04:08.740152    8241 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem org=jenkins.addons-203484 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-203484]
	I1221 18:04:09.185446    8241 provision.go:172] copyRemoteCerts
	I1221 18:04:09.185522    8241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:04:09.185571    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:09.205015    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:09.309320    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 18:04:09.337185    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1221 18:04:09.364992    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 18:04:09.392396    8241 provision.go:86] duration metric: configureAuth took 671.824387ms
	I1221 18:04:09.392422    8241 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:04:09.392611    8241 config.go:182] Loaded profile config "addons-203484": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:04:09.392669    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:09.411297    8241 main.go:141] libmachine: Using SSH client type: native
	I1221 18:04:09.411759    8241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:04:09.411778    8241 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1221 18:04:09.560770    8241 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1221 18:04:09.560789    8241 ubuntu.go:71] root file system type: overlay
	I1221 18:04:09.560896    8241 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1221 18:04:09.560967    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:09.579789    8241 main.go:141] libmachine: Using SSH client type: native
	I1221 18:04:09.580211    8241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:04:09.580297    8241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1221 18:04:09.741074    8241 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1221 18:04:09.741155    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:09.762079    8241 main.go:141] libmachine: Using SSH client type: native
	I1221 18:04:09.762490    8241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:04:09.762515    8241 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1221 18:04:10.576212    8241 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-21 18:04:09.735918986 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1221 18:04:10.576242    8241 machine.go:91] provisioned docker machine in 2.210645945s
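Note: the diff-or-replace one-liner above is another idempotency guard: docker.service.new only replaces the live unit (followed by daemon-reload, enable, restart) when its content actually differs, so re-running the provisioner against an unchanged machine never restarts Docker. The same guard sketched locally in Go (paths mirror the log; the systemctl sequence needs root, and updateUnit is a hypothetical name):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, desired []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, desired) {
			return nil // unchanged: skip the disruptive restart
		}
		if err := os.WriteFile(path, desired, 0644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		// exercise the guard against a scratch path, not the real unit
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/tmp/docker.service.test", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}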
	I1221 18:04:10.576253    8241 client.go:171] LocalClient.Create took 11.559794363s
	I1221 18:04:10.576265    8241 start.go:167] duration metric: libmachine.API.Create for "addons-203484" took 11.559846975s
	I1221 18:04:10.576273    8241 start.go:300] post-start starting for "addons-203484" (driver="docker")
	I1221 18:04:10.576282    8241 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:04:10.576345    8241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:04:10.576393    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:10.595065    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:10.697920    8241 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:04:10.702043    8241 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:04:10.702123    8241 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:04:10.702142    8241 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:04:10.702151    8241 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:04:10.702160    8241 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/addons for local assets ...
	I1221 18:04:10.702229    8241 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/files for local assets ...
	I1221 18:04:10.702266    8241 start.go:303] post-start completed in 125.978756ms
	I1221 18:04:10.702566    8241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-203484
	I1221 18:04:10.720395    8241 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/config.json ...
	I1221 18:04:10.720672    8241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:04:10.720728    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:10.742076    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:10.841266    8241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:04:10.846904    8241 start.go:128] duration metric: createHost completed in 11.833055194s
	I1221 18:04:10.846927    8241 start.go:83] releasing machines lock for "addons-203484", held for 11.833194535s
	I1221 18:04:10.846994    8241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-203484
	I1221 18:04:10.866764    8241 ssh_runner.go:195] Run: cat /version.json
	I1221 18:04:10.866830    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:10.867098    8241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:04:10.867153    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:10.889600    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:10.890070    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:10.992431    8241 ssh_runner.go:195] Run: systemctl --version
	I1221 18:04:11.134902    8241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:04:11.140401    8241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1221 18:04:11.169069    8241 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:04:11.169191    8241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:04:11.201746    8241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1221 18:04:11.201779    8241 start.go:475] detecting cgroup driver to use...
	I1221 18:04:11.201833    8241 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:04:11.201945    8241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:04:11.221246    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1221 18:04:11.233188    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1221 18:04:11.244238    8241 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1221 18:04:11.244341    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1221 18:04:11.256056    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:04:11.267805    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1221 18:04:11.279167    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:04:11.290627    8241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:04:11.301497    8241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1221 18:04:11.312947    8241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:04:11.322726    8241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:04:11.332229    8241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:04:11.417703    8241 ssh_runner.go:195] Run: sudo systemctl restart containerd
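Note: the sed edits above keep containerd's config consistent with what was detected on the host: sandbox_image pinned to registry.k8s.io/pause:3.9, SystemdCgroup=false because the host uses the cgroupfs driver, and the runc v2 shim; ip_forward is then enabled because routed pod-to-pod traffic needs it. A tiny Go sketch of that last step, writing /proc directly just as the provisioner's shell does (requires root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const path = "/proc/sys/net/ipv4/ip_forward"
		if err := os.WriteFile(path, []byte("1\n"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, "write (needs root):", err)
		}
		v, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("ip_forward =", strings.TrimSpace(string(v)))
	}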
	I1221 18:04:11.524971    8241 start.go:475] detecting cgroup driver to use...
	I1221 18:04:11.525014    8241 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:04:11.525064    8241 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1221 18:04:11.543674    8241 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1221 18:04:11.543742    8241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1221 18:04:11.556807    8241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:04:11.576059    8241 ssh_runner.go:195] Run: which cri-dockerd
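Note: /etc/crictl.yaml is written twice in this run: first pointing crictl at containerd's socket, then, once containerd shutdown is skipped because Docker is bound to it, at cri-dockerd's socket, which is what kubeadm and crictl will talk to from here on. A sketch of that endpoint switch (writeCrictlConfig is a hypothetical helper):

	package main

	import (
		"fmt"
		"os"
	)

	func writeCrictlConfig(runtime string) error {
		endpoint := "unix:///run/containerd/containerd.sock"
		if runtime == "docker" {
			endpoint = "unix:///var/run/cri-dockerd.sock"
		}
		return os.WriteFile("/etc/crictl.yaml",
			[]byte("runtime-endpoint: "+endpoint+"\n"), 0644)
	}

	func main() {
		if err := writeCrictlConfig("docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}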
	I1221 18:04:11.580448    8241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1221 18:04:11.590775    8241 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1221 18:04:11.612295    8241 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1221 18:04:11.728255    8241 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1221 18:04:11.838240    8241 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1221 18:04:11.838362    8241 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1221 18:04:11.859863    8241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:04:11.956853    8241 ssh_runner.go:195] Run: sudo systemctl restart docker
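Note: the 130-byte /etc/docker/daemon.json above aligns Docker's cgroup driver with the "cgroupfs" driver detected earlier, so dockerd and the kubelet (configured with cgroupDriver: cgroupfs further down) agree on who owns the cgroup tree. The file's exact contents are not shown in this log; a plausible reconstruction using Docker's documented exec-opts key:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// assumption: the field set here is illustrative; only the cgroup
		// driver choice is confirmed by the surrounding log lines
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b))
	}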
	I1221 18:04:12.228807    8241 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1221 18:04:12.325922    8241 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1221 18:04:12.426056    8241 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1221 18:04:12.518854    8241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:04:12.613613    8241 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1221 18:04:12.630283    8241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:04:12.733988    8241 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1221 18:04:12.821459    8241 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1221 18:04:12.821615    8241 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1221 18:04:12.827726    8241 start.go:543] Will wait 60s for crictl version
	I1221 18:04:12.827833    8241 ssh_runner.go:195] Run: which crictl
	I1221 18:04:12.832501    8241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:04:12.891460    8241 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1221 18:04:12.891573    8241 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:04:12.917734    8241 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:04:12.946642    8241 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1221 18:04:12.946754    8241 cli_runner.go:164] Run: docker network inspect addons-203484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:04:12.964613    8241 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 18:04:12.969036    8241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:04:12.981945    8241 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:04:12.982013    8241 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:04:13.002848    8241 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1221 18:04:13.002881    8241 docker.go:601] Images already preloaded, skipping extraction
	I1221 18:04:13.002938    8241 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:04:13.023830    8241 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1221 18:04:13.023852    8241 cache_images.go:84] Images are preloaded, skipping loading
	I1221 18:04:13.023918    8241 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1221 18:04:13.084449    8241 cni.go:84] Creating CNI manager for ""
	I1221 18:04:13.084472    8241 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:04:13.084500    8241 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:04:13.084518    8241 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-203484 NodeName:addons-203484 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 18:04:13.084656    8241 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-203484"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
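Note: the kubeadm config above is a single file holding four YAML documents: InitConfiguration (node-local API endpoint, CRI socket, kubelet args), ClusterConfiguration (API server SANs, admission plugins, etcd, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, disabled disk eviction), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). A small stdlib-only Go sketch that splits the multi-document file and lists each kind:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("document %d: %s\n", i, line)
				}
			}
		}
	}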
	
	I1221 18:04:13.084713    8241 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-203484 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-203484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1221 18:04:13.084775    8241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1221 18:04:13.095206    8241 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:04:13.095280    8241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:04:13.105595    8241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1221 18:04:13.127047    8241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 18:04:13.147977    8241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1221 18:04:13.168906    8241 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:04:13.173155    8241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:04:13.186108    8241 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484 for IP: 192.168.49.2
	I1221 18:04:13.186141    8241 certs.go:190] acquiring lock for shared ca certs: {Name:mke521584ecf21f65224996fffab5af98b398a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:13.186282    8241 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key
	I1221 18:04:13.838881    8241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt ...
	I1221 18:04:13.838913    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt: {Name:mke7b3eb6203213d68becb671cbb9fe7138d284d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:13.839102    8241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key ...
	I1221 18:04:13.839114    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key: {Name:mk4caf3e8ea5d91abe1265327b898bf1482706b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:13.839206    8241 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key
	I1221 18:04:14.194895    8241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt ...
	I1221 18:04:14.194924    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt: {Name:mkd2291b4caae6c983466b8dd26a6f6f84c0854f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:14.195102    8241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key ...
	I1221 18:04:14.195113    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key: {Name:mkd08b196aebea54c0935e490dea40408ed6d8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:14.195231    8241 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.key
	I1221 18:04:14.195247    8241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt with IP's: []
	I1221 18:04:14.508780    8241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt ...
	I1221 18:04:14.508809    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: {Name:mke8a2421aaea5e52f642dc317421065e7d96610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:14.508985    8241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.key ...
	I1221 18:04:14.508997    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.key: {Name:mk7a71a45d37b63597de4b1d1abf9abbb1035c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:14.509076    8241 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key.dd3b5fb2
	I1221 18:04:14.509094    8241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:04:15.063637    8241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt.dd3b5fb2 ...
	I1221 18:04:15.063669    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt.dd3b5fb2: {Name:mk4ef7feb6f49d1c2e921e7e69293698b16948a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:15.063863    8241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key.dd3b5fb2 ...
	I1221 18:04:15.063887    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key.dd3b5fb2: {Name:mk53694306d641d3316a61291e6ed02c56921b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:15.063976    8241 certs.go:337] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt
	I1221 18:04:15.064052    8241 certs.go:341] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key
	I1221 18:04:15.064101    8241 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.key
	I1221 18:04:15.064121    8241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.crt with IP's: []
	I1221 18:04:15.589526    8241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.crt ...
	I1221 18:04:15.589558    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.crt: {Name:mk04b85fdba7bc3c1102fc2d3663b2e2e36a245d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:15.589734    8241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.key ...
	I1221 18:04:15.589745    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.key: {Name:mkacf78cdd80510350f2488c0bd59044d1f46b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:15.589923    8241 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 18:04:15.589964    8241 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem (1082 bytes)
	I1221 18:04:15.589994    8241 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:04:15.590023    8241 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem (1675 bytes)
	I1221 18:04:15.590588    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:04:15.619293    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 18:04:15.647217    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:04:15.676106    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 18:04:15.704454    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:04:15.731833    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1221 18:04:15.759702    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:04:15.786636    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:04:15.813975    8241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:04:15.841969    8241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:04:15.862946    8241 ssh_runner.go:195] Run: openssl version
	I1221 18:04:15.869943    8241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:04:15.881414    8241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:04:15.885934    8241 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:04:15.885999    8241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:04:15.894461    8241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
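Note: OpenSSL locates trust anchors in /etc/ssl/certs by subject hash, so the two steps above first compute that hash with "openssl x509 -hash -noout" and then create the c_rehash-style symlink b5213941.0 pointing at minikubeCA.pem. A quick Go check that the link landed where expected:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		target, err := os.Readlink("/etc/ssl/certs/b5213941.0")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// expected: /etc/ssl/certs/minikubeCA.pem, per the ln -fs above
		fmt.Println("b5213941.0 ->", target)
	}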
	I1221 18:04:15.905519    8241 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:04:15.909940    8241 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:04:15.909997    8241 kubeadm.go:404] StartCluster: {Name:addons-203484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-203484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:04:15.910139    8241 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1221 18:04:15.929465    8241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:04:15.940240    8241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:04:15.950770    8241 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:04:15.950832    8241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:04:15.961380    8241 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:04:15.961423    8241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 18:04:16.013631    8241 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1221 18:04:16.013782    8241 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:04:16.074412    8241 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:04:16.074488    8241 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1221 18:04:16.074526    8241 kubeadm.go:322] OS: Linux
	I1221 18:04:16.074573    8241 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:04:16.074626    8241 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:04:16.074675    8241 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:04:16.074722    8241 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:04:16.074771    8241 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:04:16.074832    8241 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:04:16.074880    8241 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1221 18:04:16.074929    8241 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1221 18:04:16.074977    8241 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1221 18:04:16.157639    8241 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:04:16.157751    8241 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:04:16.157846    8241 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 18:04:16.494259    8241 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:04:16.499868    8241 out.go:204]   - Generating certificates and keys ...
	I1221 18:04:16.500000    8241 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:04:16.500106    8241 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:04:17.561264    8241 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:04:18.011386    8241 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:04:18.398614    8241 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:04:20.491659    8241 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:04:20.744181    8241 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:04:20.744518    8241 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-203484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:04:21.564811    8241 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:04:21.565061    8241 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-203484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:04:21.816377    8241 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:04:23.150637    8241 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:04:23.933254    8241 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:04:23.933436    8241 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:04:24.508460    8241 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:04:24.977072    8241 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:04:25.825415    8241 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:04:26.616274    8241 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:04:26.617062    8241 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:04:26.619874    8241 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:04:26.622013    8241 out.go:204]   - Booting up control plane ...
	I1221 18:04:26.622107    8241 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:04:26.622909    8241 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:04:26.624238    8241 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:04:26.638942    8241 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:04:26.640090    8241 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:04:26.640280    8241 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:04:26.754569    8241 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:04:34.256925    8241 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502768 seconds
	I1221 18:04:34.257039    8241 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:04:34.274823    8241 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:04:34.799602    8241 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:04:34.800022    8241 kubeadm.go:322] [mark-control-plane] Marking the node addons-203484 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 18:04:35.311727    8241 kubeadm.go:322] [bootstrap-token] Using token: kfg59z.6cyrr6n64azyvank
	I1221 18:04:35.314196    8241 out.go:204]   - Configuring RBAC rules ...
	I1221 18:04:35.314351    8241 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:04:35.320601    8241 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:04:35.327764    8241 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:04:35.331060    8241 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:04:35.336619    8241 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:04:35.339833    8241 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:04:35.352249    8241 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:04:35.589006    8241 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:04:35.725511    8241 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:04:35.728017    8241 kubeadm.go:322] 
	I1221 18:04:35.728097    8241 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:04:35.728103    8241 kubeadm.go:322] 
	I1221 18:04:35.728176    8241 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:04:35.728183    8241 kubeadm.go:322] 
	I1221 18:04:35.728220    8241 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:04:35.730361    8241 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:04:35.730418    8241 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:04:35.730426    8241 kubeadm.go:322] 
	I1221 18:04:35.730477    8241 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1221 18:04:35.730488    8241 kubeadm.go:322] 
	I1221 18:04:35.730533    8241 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 18:04:35.730541    8241 kubeadm.go:322] 
	I1221 18:04:35.730590    8241 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:04:35.730667    8241 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:04:35.730743    8241 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:04:35.730752    8241 kubeadm.go:322] 
	I1221 18:04:35.730842    8241 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:04:35.730922    8241 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:04:35.730930    8241 kubeadm.go:322] 
	I1221 18:04:35.731014    8241 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kfg59z.6cyrr6n64azyvank \
	I1221 18:04:35.731113    8241 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b \
	I1221 18:04:35.731138    8241 kubeadm.go:322] 	--control-plane 
	I1221 18:04:35.731144    8241 kubeadm.go:322] 
	I1221 18:04:35.731235    8241 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:04:35.731242    8241 kubeadm.go:322] 
	I1221 18:04:35.731326    8241 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kfg59z.6cyrr6n64azyvank \
	I1221 18:04:35.731438    8241 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b 
	I1221 18:04:35.738960    8241 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1221 18:04:35.739073    8241 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
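Note: the --discovery-token-ca-cert-hash in the join commands above is not a digest of the whole CA certificate; it is the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo. A verification sketch that recomputes it from the ca.crt this run placed under /var/lib/minikube/certs (run on the node):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		// should print sha256:2f6b4ffd... matching the kubeadm output above
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}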
	I1221 18:04:35.739091    8241 cni.go:84] Creating CNI manager for ""
	I1221 18:04:35.739113    8241 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:04:35.742926    8241 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1221 18:04:35.744801    8241 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1221 18:04:35.758671    8241 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
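Note: the 457-byte 1-k8s.conflist written above is the bridge CNI config recommended for the docker driver plus docker runtime combination. Its exact contents are not in this log; a hedged reconstruction of the usual shape (bridge plugin with host-local IPAM on the 10.244.0.0/16 podSubnet from the kubeadm config, plus portmap), with every field here an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conf := map[string]any{
			"cniVersion": "0.3.1", // assumed version, not taken from the log
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // podSubnet from kubeadm.yaml above
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		b, _ := json.MarshalIndent(conf, "", "  ")
		fmt.Println(string(b))
	}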
	I1221 18:04:35.791901    8241 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:04:35.792031    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:35.792106    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=addons-203484 minikube.k8s.io/updated_at=2023_12_21T18_04_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:36.118922    8241 ops.go:34] apiserver oom_adj: -16
	I1221 18:04:36.119017    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:36.619462    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:37.119242    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:37.619196    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:38.119449    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:38.619247    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:39.119515    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:39.619547    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:40.119140    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:40.619311    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:41.120080    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:41.619145    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:42.119764    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:42.619154    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:43.119768    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:43.619855    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:44.119144    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:44.619101    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:45.119163    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:45.619892    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:46.119582    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:46.619157    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:47.120002    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:47.619902    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:48.120061    8241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:04:48.226347    8241 kubeadm.go:1088] duration metric: took 12.4343597s to wait for elevateKubeSystemPrivileges.
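The burst of `kubectl get sa default` calls above is a poll loop: the default service account is created asynchronously by the controller manager, and minikube retries roughly every 500ms (12.43s in total here) until it exists before considering elevateKubeSystemPrivileges done. The equivalent shell pattern, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # interval inferred from the timestamps above
	done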
	I1221 18:04:48.226372    8241 kubeadm.go:406] StartCluster complete in 32.316392574s
	I1221 18:04:48.226388    8241 settings.go:142] acquiring lock: {Name:mk8f5959956e96f0518268d8a4693f16253e6146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:48.226507    8241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:04:48.226887    8241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/kubeconfig: {Name:mkd5570705146782261fe0b7e76619864f470748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:48.227076    8241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:04:48.227373    8241 config.go:182] Loaded profile config "addons-203484": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:04:48.227405    8241 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I1221 18:04:48.227474    8241 addons.go:69] Setting yakd=true in profile "addons-203484"
	I1221 18:04:48.227496    8241 addons.go:237] Setting addon yakd=true in "addons-203484"
	I1221 18:04:48.227529    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.228012    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.228510    8241 addons.go:69] Setting cloud-spanner=true in profile "addons-203484"
	I1221 18:04:48.228528    8241 addons.go:237] Setting addon cloud-spanner=true in "addons-203484"
	I1221 18:04:48.228580    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.229001    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.229689    8241 addons.go:69] Setting metrics-server=true in profile "addons-203484"
	I1221 18:04:48.229715    8241 addons.go:237] Setting addon metrics-server=true in "addons-203484"
	I1221 18:04:48.229746    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.230140    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.230480    8241 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-203484"
	I1221 18:04:48.230502    8241 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-203484"
	I1221 18:04:48.230536    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.230927    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.234000    8241 addons.go:69] Setting registry=true in profile "addons-203484"
	I1221 18:04:48.234032    8241 addons.go:237] Setting addon registry=true in "addons-203484"
	I1221 18:04:48.234094    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.234533    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.239416    8241 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-203484"
	I1221 18:04:48.239477    8241 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-203484"
	I1221 18:04:48.239525    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.239954    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.249604    8241 addons.go:69] Setting storage-provisioner=true in profile "addons-203484"
	I1221 18:04:48.249635    8241 addons.go:237] Setting addon storage-provisioner=true in "addons-203484"
	I1221 18:04:48.249704    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.250175    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.251679    8241 addons.go:69] Setting default-storageclass=true in profile "addons-203484"
	I1221 18:04:48.251724    8241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-203484"
	I1221 18:04:48.252024    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.263555    8241 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-203484"
	I1221 18:04:48.263596    8241 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-203484"
	I1221 18:04:48.264712    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.267581    8241 addons.go:69] Setting gcp-auth=true in profile "addons-203484"
	I1221 18:04:48.267613    8241 mustload.go:65] Loading cluster: addons-203484
	I1221 18:04:48.267815    8241 config.go:182] Loaded profile config "addons-203484": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:04:48.268068    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.285846    8241 addons.go:69] Setting volumesnapshots=true in profile "addons-203484"
	I1221 18:04:48.285880    8241 addons.go:237] Setting addon volumesnapshots=true in "addons-203484"
	I1221 18:04:48.285936    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.286427    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.309050    8241 addons.go:69] Setting ingress=true in profile "addons-203484"
	I1221 18:04:48.309087    8241 addons.go:237] Setting addon ingress=true in "addons-203484"
	I1221 18:04:48.309146    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.309634    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.328399    8241 addons.go:69] Setting ingress-dns=true in profile "addons-203484"
	I1221 18:04:48.328433    8241 addons.go:237] Setting addon ingress-dns=true in "addons-203484"
	I1221 18:04:48.328511    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.329002    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.361350    8241 addons.go:69] Setting inspektor-gadget=true in profile "addons-203484"
	I1221 18:04:48.361389    8241 addons.go:237] Setting addon inspektor-gadget=true in "addons-203484"
	I1221 18:04:48.361454    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.361928    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.431625    8241 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1221 18:04:48.437530    8241 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1221 18:04:48.437747    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1221 18:04:48.437865    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.531277    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.538604    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1221 18:04:48.541685    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1221 18:04:48.543962    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1221 18:04:48.543991    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1221 18:04:48.544051    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.543391    8241 out.go:177]   - Using image docker.io/registry:2.8.3
	I1221 18:04:48.543444    8241 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1221 18:04:48.543451    8241 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1221 18:04:48.543455    8241 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1221 18:04:48.577470    8241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:04:48.577480    8241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:04:48.581669    8241 addons.go:237] Setting addon default-storageclass=true in "addons-203484"
	I1221 18:04:48.585117    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1221 18:04:48.586831    8241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:04:48.586967    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.590867    8241 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:04:48.591569    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.591607    8241 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1221 18:04:48.591671    8241 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 18:04:48.591706    8241 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1221 18:04:48.596475    8241 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1221 18:04:48.596491    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:04:48.606031    8241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:04:48.602109    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1221 18:04:48.602256    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.602267    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1221 18:04:48.602272    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1221 18:04:48.602277    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1221 18:04:48.624301    8241 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1221 18:04:48.634078    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1221 18:04:48.634167    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.645605    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.650562    8241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1221 18:04:48.654218    8241 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 18:04:48.654237    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1221 18:04:48.654291    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.672818    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1221 18:04:48.675067    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1221 18:04:48.651798    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.651827    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.678756    8241 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1221 18:04:48.680584    8241 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1221 18:04:48.680600    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1221 18:04:48.680664    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.676783    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
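Each `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` call above reads the host port Docker mapped to the node container's SSH port 22; the ssh clients that follow all connect to 127.0.0.1 on that port (32772 here). Standalone, with the log's extra quoting dropped:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-203484
	# prints: 32772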
	I1221 18:04:48.706175    8241 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1221 18:04:48.708091    8241 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 18:04:48.708111    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1221 18:04:48.708180    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.718983    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1221 18:04:48.724587    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1221 18:04:48.758172    8241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1221 18:04:48.761906    8241 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-203484"
	I1221 18:04:48.762869    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:48.763320    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:48.763532    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1221 18:04:48.763561    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1221 18:04:48.763625    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.762253    8241 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-203484" context rescaled to 1 replicas
	I1221 18:04:48.779523    8241 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1221 18:04:48.781490    8241 out.go:177] * Verifying Kubernetes components...
	I1221 18:04:48.810836    8241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:04:48.810727    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.917932    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.925270    8241 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:04:48.925291    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:04:48.925350    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:48.925721    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.927574    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.938845    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.954841    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.981512    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:48.988530    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:49.031533    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:49.031700    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:49.042054    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:49.044414    8241 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1221 18:04:49.046041    8241 out.go:177]   - Using image docker.io/busybox:stable
	I1221 18:04:49.047998    8241 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 18:04:49.048019    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1221 18:04:49.048076    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:49.096608    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:49.281378    8241 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1221 18:04:49.281455    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1221 18:04:49.478073    8241 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1221 18:04:49.478134    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1221 18:04:49.528529    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:04:49.540517    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:04:49.561278    8241 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1221 18:04:49.561301    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1221 18:04:49.624198    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 18:04:49.678044    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1221 18:04:49.709846    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 18:04:49.730435    8241 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1221 18:04:49.730467    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1221 18:04:49.761145    8241 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1221 18:04:49.761218    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1221 18:04:49.764072    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1221 18:04:49.764135    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1221 18:04:49.830512    8241 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1221 18:04:49.830599    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1221 18:04:49.848187    8241 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1221 18:04:49.848252    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1221 18:04:49.887850    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 18:04:49.892846    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 18:04:50.012980    8241 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1221 18:04:50.013067    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1221 18:04:50.272965    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1221 18:04:50.351159    8241 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1221 18:04:50.351238    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1221 18:04:50.424172    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1221 18:04:50.424243    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1221 18:04:50.433434    8241 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1221 18:04:50.433517    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1221 18:04:50.504820    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1221 18:04:50.504911    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1221 18:04:50.541457    8241 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1221 18:04:50.541512    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1221 18:04:50.773286    8241 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:04:50.773350    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1221 18:04:50.780762    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1221 18:04:50.780823    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1221 18:04:50.814639    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:04:50.817099    8241 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1221 18:04:50.817157    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1221 18:04:50.915002    8241 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 18:04:50.915062    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1221 18:04:50.963005    8241 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1221 18:04:50.963084    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1221 18:04:51.108817    8241 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1221 18:04:51.108908    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1221 18:04:51.159024    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1221 18:04:51.159091    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1221 18:04:51.187454    8241 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1221 18:04:51.187514    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1221 18:04:51.273323    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 18:04:51.395672    8241 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1221 18:04:51.395740    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1221 18:04:51.482899    8241 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1221 18:04:51.482963    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1221 18:04:51.529134    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1221 18:04:51.634231    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1221 18:04:51.634300    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1221 18:04:51.822585    8241 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1221 18:04:51.822611    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1221 18:04:51.866183    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1221 18:04:51.866210    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1221 18:04:51.928162    8241 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.342932369s)
	I1221 18:04:51.928192    8241 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
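The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` stanza and a `log` directive ahead of `errors`, then feeds the result to `kubectl replace`. The patched Corefile can be inspected afterwards; the expected hosts block (reconstructed from the sed expressions, not captured verbatim in this log) is shown as a comment:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain:
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }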
	I1221 18:04:51.928265    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399675265s)
	I1221 18:04:51.928462    8241 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.116028691s)
	I1221 18:04:51.929321    8241 node_ready.go:35] waiting up to 6m0s for node "addons-203484" to be "Ready" ...
	I1221 18:04:51.937816    8241 node_ready.go:49] node "addons-203484" has status "Ready":"True"
	I1221 18:04:51.937843    8241 node_ready.go:38] duration metric: took 8.498447ms waiting for node "addons-203484" to be "Ready" ...
	I1221 18:04:51.937853    8241 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:04:51.946862    8241 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r4m6t" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.956803    8241 pod_ready.go:92] pod "coredns-5dd5756b68-r4m6t" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:51.956826    8241 pod_ready.go:81] duration metric: took 9.935151ms waiting for pod "coredns-5dd5756b68-r4m6t" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.956837    8241 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.962158    8241 pod_ready.go:92] pod "etcd-addons-203484" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:51.962181    8241 pod_ready.go:81] duration metric: took 5.336988ms waiting for pod "etcd-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.962192    8241 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.967628    8241 pod_ready.go:92] pod "kube-apiserver-addons-203484" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:51.967657    8241 pod_ready.go:81] duration metric: took 5.458113ms waiting for pod "kube-apiserver-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.967669    8241 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.973059    8241 pod_ready.go:92] pod "kube-controller-manager-addons-203484" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:51.973092    8241 pod_ready.go:81] duration metric: took 5.415117ms waiting for pod "kube-controller-manager-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:51.973103    8241 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9jc8j" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:52.073203    8241 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1221 18:04:52.073240    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1221 18:04:52.096976    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1221 18:04:52.097002    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1221 18:04:52.158725    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1221 18:04:52.158750    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1221 18:04:52.202383    8241 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 18:04:52.202407    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1221 18:04:52.236241    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 18:04:52.322496    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1221 18:04:52.335589    8241 pod_ready.go:92] pod "kube-proxy-9jc8j" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:52.335614    8241 pod_ready.go:81] duration metric: took 362.504738ms waiting for pod "kube-proxy-9jc8j" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:52.335634    8241 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:52.733196    8241 pod_ready.go:92] pod "kube-scheduler-addons-203484" in "kube-system" namespace has status "Ready":"True"
	I1221 18:04:52.733227    8241 pod_ready.go:81] duration metric: took 397.578342ms waiting for pod "kube-scheduler-addons-203484" in "kube-system" namespace to be "Ready" ...
	I1221 18:04:52.733237    8241 pod_ready.go:38] duration metric: took 795.328329ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:04:52.733254    8241 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:04:52.733324    8241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:04:54.454404    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.8301701s)
	I1221 18:04:54.454468    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.776400053s)
	I1221 18:04:54.454522    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.744654632s)
	I1221 18:04:54.454720    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.566807936s)
	I1221 18:04:54.454759    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.913771805s)
	I1221 18:04:55.202240    8241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1221 18:04:55.202328    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:55.231715    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:56.204846    8241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1221 18:04:56.527727    8241 addons.go:237] Setting addon gcp-auth=true in "addons-203484"
	I1221 18:04:56.527777    8241 host.go:66] Checking if "addons-203484" exists ...
	I1221 18:04:56.528213    8241 cli_runner.go:164] Run: docker container inspect addons-203484 --format={{.State.Status}}
	I1221 18:04:56.555765    8241 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1221 18:04:56.555822    8241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-203484
	I1221 18:04:56.584616    8241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/addons-203484/id_rsa Username:docker}
	I1221 18:04:58.027172    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.21245108s)
	W1221 18:04:58.027213    8241 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 18:04:58.027238    8241 retry.go:31] will retry after 328.477394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
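This failure is a CRD establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRDs that introduce that kind, and the apiserver has no REST mapping for it yet, hence "ensure CRDs are installed first". The retry 328ms later (re-run with --force at 18:04:58 below) can succeed because the CRDs are established by then. A race-free ordering, as a sketch, applies and waits on the CRD before the custom resource:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml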
	I1221 18:04:58.027310    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.753925472s)
	I1221 18:04:58.027326    8241 addons.go:473] Verifying addon metrics-server=true in "addons-203484"
	I1221 18:04:58.027395    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.498171678s)
	I1221 18:04:58.030311    8241 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-203484 service yakd-dashboard -n yakd-dashboard
	
	
	I1221 18:04:58.027621    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.754031307s)
	I1221 18:04:58.028333    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.135410928s)
	I1221 18:04:58.033040    8241 addons.go:473] Verifying addon ingress=true in "addons-203484"
	I1221 18:04:58.036506    8241 out.go:177] * Verifying ingress addon...
	I1221 18:04:58.033167    8241 addons.go:473] Verifying addon registry=true in "addons-203484"
	I1221 18:04:58.041343    8241 out.go:177] * Verifying registry addon...
	I1221 18:04:58.039400    8241 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1221 18:04:58.043939    8241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1221 18:04:58.049747    8241 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 18:04:58.049770    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:04:58.051075    8241 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1221 18:04:58.051089    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:04:58.355896    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:04:58.549776    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:04:58.550156    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:04:59.053378    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:04:59.054147    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:04:59.552510    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:04:59.554029    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:04:59.976950    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.740664283s)
	I1221 18:04:59.976984    8241 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-203484"
	I1221 18:04:59.979322    8241 out.go:177] * Verifying csi-hostpath-driver addon...
	I1221 18:04:59.977144    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.654616145s)
	I1221 18:04:59.977172    8241 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.243831768s)
	I1221 18:04:59.977199    8241 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.421415612s)
	I1221 18:04:59.984304    8241 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1221 18:04:59.981984    8241 api_server.go:72] duration metric: took 11.202426959s to wait for apiserver process to appear ...
	I1221 18:04:59.982633    8241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1221 18:04:59.992899    8241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:04:59.987028    8241 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:04:59.992455    8241 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 18:04:59.995140    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:04:59.995236    8241 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1221 18:04:59.995280    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1221 18:04:59.995429    8241 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 18:05:00.004424    8241 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
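The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body `ok` means the control plane is serving. By hand (the -k flag skips TLS verification, since the cluster's CA is not in the host trust store):

	curl -sk https://192.168.49.2:8443/healthz
	# ok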
	I1221 18:05:00.005730    8241 api_server.go:141] control plane version: v1.28.4
	I1221 18:05:00.005780    8241 api_server.go:131] duration metric: took 10.381827ms to wait for apiserver health ...
	I1221 18:05:00.005803    8241 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:05:00.022583    8241 system_pods.go:59] 17 kube-system pods found
	I1221 18:05:00.022693    8241 system_pods.go:61] "coredns-5dd5756b68-r4m6t" [c89f4a5f-ed53-4e70-b8c8-21e965275011] Running
	I1221 18:05:00.022719    8241 system_pods.go:61] "csi-hostpath-attacher-0" [cab8e265-303e-40fb-9535-3c76a08330de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 18:05:00.022763    8241 system_pods.go:61] "csi-hostpath-resizer-0" [97c67db6-641c-4afe-bcc5-ac779ed7d2f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 18:05:00.022794    8241 system_pods.go:61] "csi-hostpathplugin-pg7d6" [bf6ead2d-849d-4ba5-a75f-8daa6c245fc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 18:05:00.022816    8241 system_pods.go:61] "etcd-addons-203484" [030d881e-6ad8-4303-bea4-d73469500c69] Running
	I1221 18:05:00.022838    8241 system_pods.go:61] "kube-apiserver-addons-203484" [4a7e8deb-f0ae-4479-975a-c52d14ae472e] Running
	I1221 18:05:00.022867    8241 system_pods.go:61] "kube-controller-manager-addons-203484" [6ef0aaf2-eeba-47c3-8fe4-a96b0a48628c] Running
	I1221 18:05:00.022892    8241 system_pods.go:61] "kube-ingress-dns-minikube" [dc7a4c7d-f8d9-447c-a10b-ebbb0b29024a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 18:05:00.022913    8241 system_pods.go:61] "kube-proxy-9jc8j" [726b6aa1-3721-47e9-b0c5-6582e2220010] Running
	I1221 18:05:00.022933    8241 system_pods.go:61] "kube-scheduler-addons-203484" [24a82476-4f2f-4408-a444-145816f2bf72] Running
	I1221 18:05:00.022965    8241 system_pods.go:61] "metrics-server-7c66d45ddc-7twwp" [50b8d452-363a-43e1-97fa-ae09bd377626] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 18:05:00.022992    8241 system_pods.go:61] "nvidia-device-plugin-daemonset-tx6g6" [037880b9-fb03-4c8d-9f30-d725cf9ea97b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 18:05:00.023020    8241 system_pods.go:61] "registry-proxy-h8kcp" [23424afa-efa8-4377-ae08-80f4b38577c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 18:05:00.023044    8241 system_pods.go:61] "registry-szhr4" [986c71e3-270d-41e4-9e7f-6efe46c2eb42] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 18:05:00.023077    8241 system_pods.go:61] "snapshot-controller-58dbcc7b99-4dtl6" [9ed10031-b07a-405a-a620-dda19dabe378] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 18:05:00.023117    8241 system_pods.go:61] "snapshot-controller-58dbcc7b99-5vbqd" [65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 18:05:00.023135    8241 system_pods.go:61] "storage-provisioner" [870d7808-659c-4399-8ba0-9557197ef968] Running
	I1221 18:05:00.023158    8241 system_pods.go:74] duration metric: took 17.337691ms to wait for pod list to return data ...
	I1221 18:05:00.023198    8241 default_sa.go:34] waiting for default service account to be created ...
	I1221 18:05:00.028806    8241 default_sa.go:45] found service account: "default"
	I1221 18:05:00.028832    8241 default_sa.go:55] duration metric: took 5.612372ms for default service account to be created ...
	I1221 18:05:00.028842    8241 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 18:05:00.042454    8241 system_pods.go:86] 17 kube-system pods found
	I1221 18:05:00.042563    8241 system_pods.go:89] "coredns-5dd5756b68-r4m6t" [c89f4a5f-ed53-4e70-b8c8-21e965275011] Running
	I1221 18:05:00.042598    8241 system_pods.go:89] "csi-hostpath-attacher-0" [cab8e265-303e-40fb-9535-3c76a08330de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 18:05:00.042640    8241 system_pods.go:89] "csi-hostpath-resizer-0" [97c67db6-641c-4afe-bcc5-ac779ed7d2f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 18:05:00.042668    8241 system_pods.go:89] "csi-hostpathplugin-pg7d6" [bf6ead2d-849d-4ba5-a75f-8daa6c245fc4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 18:05:00.042693    8241 system_pods.go:89] "etcd-addons-203484" [030d881e-6ad8-4303-bea4-d73469500c69] Running
	I1221 18:05:00.042714    8241 system_pods.go:89] "kube-apiserver-addons-203484" [4a7e8deb-f0ae-4479-975a-c52d14ae472e] Running
	I1221 18:05:00.042744    8241 system_pods.go:89] "kube-controller-manager-addons-203484" [6ef0aaf2-eeba-47c3-8fe4-a96b0a48628c] Running
	I1221 18:05:00.042772    8241 system_pods.go:89] "kube-ingress-dns-minikube" [dc7a4c7d-f8d9-447c-a10b-ebbb0b29024a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 18:05:00.042791    8241 system_pods.go:89] "kube-proxy-9jc8j" [726b6aa1-3721-47e9-b0c5-6582e2220010] Running
	I1221 18:05:00.042812    8241 system_pods.go:89] "kube-scheduler-addons-203484" [24a82476-4f2f-4408-a444-145816f2bf72] Running
	I1221 18:05:00.042841    8241 system_pods.go:89] "metrics-server-7c66d45ddc-7twwp" [50b8d452-363a-43e1-97fa-ae09bd377626] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 18:05:00.042867    8241 system_pods.go:89] "nvidia-device-plugin-daemonset-tx6g6" [037880b9-fb03-4c8d-9f30-d725cf9ea97b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 18:05:00.042892    8241 system_pods.go:89] "registry-proxy-h8kcp" [23424afa-efa8-4377-ae08-80f4b38577c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 18:05:00.042912    8241 system_pods.go:89] "registry-szhr4" [986c71e3-270d-41e4-9e7f-6efe46c2eb42] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 18:05:00.042945    8241 system_pods.go:89] "snapshot-controller-58dbcc7b99-4dtl6" [9ed10031-b07a-405a-a620-dda19dabe378] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 18:05:00.042970    8241 system_pods.go:89] "snapshot-controller-58dbcc7b99-5vbqd" [65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 18:05:00.042990    8241 system_pods.go:89] "storage-provisioner" [870d7808-659c-4399-8ba0-9557197ef968] Running
	I1221 18:05:00.043014    8241 system_pods.go:126] duration metric: took 14.166049ms to wait for k8s-apps to be running ...
	I1221 18:05:00.043043    8241 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 18:05:00.043122    8241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:05:00.060452    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:00.061096    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:00.070096    8241 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1221 18:05:00.070117    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1221 18:05:00.190487    8241 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 18:05:00.190507    8241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1221 18:05:00.282063    8241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 18:05:00.387617    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.031610237s)
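The apply runs above follow one pattern: manifests are copied into /etc/kubernetes/addons over SSH, then the bundled kubectl is invoked against them with an explicit KUBECONFIG. A rough sketch of that command shape, with applyManifests as a hypothetical stand-in for minikube's addons code rather than its actual API:

```go
// Hypothetical sketch of the "scp manifests, then kubectl apply" step seen
// in the log above; the real logic lives in minikube's ssh_runner/addons
// packages and runs over SSH inside the node, not locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyManifests mirrors the logged command shape: an explicit KUBECONFIG
// plus a versioned kubectl binary under /var/lib/minikube/binaries.
func applyManifests(kubectlVersion string, manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts VAR=value assignments before the command to run.
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, strings.TrimSpace(string(out)))
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"/etc/kubernetes/addons/gcp-auth-service.yaml",
		"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
	}
	if err := applyManifests("v1.28.4", manifests); err != nil {
		fmt.Println(err)
	}
}
```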
	I1221 18:05:00.387702    8241 system_svc.go:56] duration metric: took 344.66721ms WaitForService to wait for kubelet.
	I1221 18:05:00.387729    8241 kubeadm.go:581] duration metric: took 11.608176093s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1221 18:05:00.387777    8241 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:05:00.391291    8241 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1221 18:05:00.391408    8241 node_conditions.go:123] node cpu capacity is 2
	I1221 18:05:00.391444    8241 node_conditions.go:105] duration metric: took 3.650148ms to run NodePressure ...
	I1221 18:05:00.391484    8241 start.go:228] waiting for startup goroutines ...
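The kubelet readiness check logged above (`sudo systemctl is-active --quiet service kubelet`) relies purely on the command's exit status. A minimal local sketch of that check, assuming direct shell access rather than minikube's ssh_runner:

```go
// Minimal sketch of the systemctl liveness check from the log above.
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active: systemctl exits
// 0 when the unit is active, non-zero otherwise; --quiet suppresses output,
// so the exit status is the whole answer.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
```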
	I1221 18:05:00.492772    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:00.550565    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:00.551131    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:00.992832    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:01.050754    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:01.051884    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:01.491389    8241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.209248486s)
	I1221 18:05:01.494779    8241 addons.go:473] Verifying addon gcp-auth=true in "addons-203484"
	I1221 18:05:01.500103    8241 out.go:177] * Verifying gcp-auth addon...
	I1221 18:05:01.500085    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:01.503265    8241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1221 18:05:01.515660    8241 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1221 18:05:01.515723    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
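The kapi.go lines throughout this log record minikube's pod-wait loop: list pods by label selector, log the current phase, retry until every match is Running or the timeout expires. A rough client-go sketch of that pattern, with waitForLabel and its parameters as illustrative stand-ins rather than minikube's actual kapi API:

```go
// Sketch of a label-selector pod wait in the style of the kapi.go polling
// seen above; not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running,
// logging the current phase on each miss, much like the kapi.go:96 lines.
func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing found yet (or transient error): keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute)
	fmt.Println("wait result:", err)
}
```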
	I1221 18:05:01.550639    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:01.551732    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:01.992692    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:02.007475    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:02.051060    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:02.051959    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:02.492419    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:02.506945    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:02.550920    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:02.551556    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:02.996818    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:03.007003    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:03.050285    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:03.051421    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:03.492524    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:03.506987    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:03.548122    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:03.550674    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:03.992692    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:04.007476    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:04.049680    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:04.049833    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:04.493413    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:04.507327    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:04.549465    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:04.551149    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:04.992659    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:05.006995    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:05.048320    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:05.050789    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:05.492970    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:05.507699    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:05.548220    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:05.550989    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:05.993488    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:06.007461    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:06.049134    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:06.050254    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:06.496296    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:06.507708    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:06.548696    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:06.549876    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:06.992614    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:07.010288    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:07.049861    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:07.050415    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:07.492486    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:07.507134    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:07.548478    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:07.548898    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:07.992487    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:08.007818    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:08.056953    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:08.058099    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:08.492525    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:08.507650    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:08.550741    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:08.551709    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:08.993140    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:09.007900    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:09.050207    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:09.051305    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:09.492339    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:09.507833    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:09.549960    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:09.550617    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:09.992441    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:10.006709    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:10.049270    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:10.050372    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:10.492154    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:10.507720    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:10.559648    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:10.560851    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:10.992169    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:11.007212    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:11.049565    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:11.050037    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:11.491881    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:11.507717    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:11.549910    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:11.550435    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:11.992471    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:12.007577    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:12.048564    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:12.049352    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:12.493137    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:12.507841    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:12.553273    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:12.554824    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:12.992355    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:13.006563    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:13.047919    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:13.048891    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:13.495089    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:13.507688    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:13.548804    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:13.549304    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:13.991925    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:14.007524    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:14.054309    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:14.055260    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:14.493564    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:14.507186    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:14.551139    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:14.551968    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:14.992540    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:15.006659    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:15.047881    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:15.050997    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:15.492584    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:15.507073    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:15.548976    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:15.549440    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:15.992493    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:16.007281    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:16.049667    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:16.050383    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:16.491847    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:16.507379    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:16.548184    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:16.548336    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:16.992592    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:17.007176    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:17.049395    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:17.051288    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:17.493535    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:17.506927    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:17.549803    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:17.551203    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:17.997908    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:18.007065    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:18.047154    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:18.050310    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:18.492250    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:18.508005    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:18.549748    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:18.550556    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:18.992753    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:19.007226    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:19.049240    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:19.049856    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:19.492541    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:19.506897    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:19.553995    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:19.562915    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:19.992160    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:20.009310    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:20.048819    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:20.049711    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:20.493114    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:20.509086    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:20.547377    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:20.550225    8241 kapi.go:107] duration metric: took 22.506286399s to wait for kubernetes.io/minikube-addons=registry ...
	I1221 18:05:20.992748    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:21.007147    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:21.047662    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:21.496195    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:21.508781    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:21.547820    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:21.994140    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:22.007689    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:22.048161    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:22.492359    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:22.507684    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:22.547707    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:22.992201    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:23.007567    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:23.048245    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:23.491823    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:23.507850    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:23.548186    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:23.992364    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:24.006741    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:24.047934    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:24.492467    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:24.506897    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:24.547457    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:24.991889    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:25.007801    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:25.047774    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:25.493280    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:25.507801    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:25.549909    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:25.993158    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:26.007609    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:26.047839    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:26.494474    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:26.512217    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:26.548149    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:26.993236    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:27.007694    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:27.047894    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:27.491926    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:27.507427    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:27.547450    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:27.992452    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:28.007142    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:28.047457    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:28.492294    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:28.507937    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:28.547877    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:28.992387    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:29.006610    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:29.047470    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:29.492584    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:29.506576    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:29.548453    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:29.992453    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:30.007081    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:30.048237    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:30.494025    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:30.508072    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:30.547607    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:30.992289    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:31.007596    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:31.048154    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:31.493570    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:31.508154    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:31.548179    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:31.992905    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:32.007747    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:32.048810    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:32.493609    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:32.507701    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:32.551803    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:32.992882    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:33.007800    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:33.049440    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:33.492278    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:33.507952    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:33.546943    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:33.993144    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:34.007390    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:34.047584    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:34.493202    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:34.511504    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:34.547575    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:34.992398    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:35.006893    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:35.048225    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:35.491872    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:35.507173    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:35.548268    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:35.994596    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:36.010859    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:36.048429    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:36.491879    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:36.506838    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:36.547842    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:36.992049    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:37.007294    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:37.047599    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:37.492415    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:37.506646    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:37.547832    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:37.992698    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:38.009954    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:38.053334    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:38.500450    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:38.507518    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:38.547399    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:38.992377    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:39.007805    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:39.074802    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:39.496975    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:39.507587    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:39.549141    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:39.992791    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:40.006719    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:40.047880    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:40.493004    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:40.506920    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:40.548009    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:40.992277    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:41.008077    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:41.047493    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:41.515614    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:41.516465    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:41.549152    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:41.991903    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:42.008805    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:42.050353    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:42.492512    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:42.507708    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:42.548276    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:42.991754    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:43.006935    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:43.047944    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:43.492252    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:43.507826    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:43.547853    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:43.992450    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:44.006722    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:44.048275    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:44.493076    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:44.507563    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:44.547752    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:44.992577    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:45.007856    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:45.048400    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:45.492569    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:45.507725    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:45.548149    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:45.992698    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:46.006938    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:46.048169    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:46.493691    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:46.507383    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:46.547952    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:46.992018    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:47.007313    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:47.047416    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:47.492919    8241 kapi.go:107] duration metric: took 47.510283118s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1221 18:05:47.507379    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:47.547689    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:48.008127    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:48.047408    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:48.507382    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:48.547662    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:49.007525    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:49.047551    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:49.507272    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:49.548053    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:50.006895    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:50.047880    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:50.507650    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:50.547607    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:51.007662    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:51.048085    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:51.507780    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:51.548083    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:52.007569    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:52.047394    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:52.508194    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:52.547492    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:53.007061    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:53.048466    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:53.508135    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:53.548485    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:54.007201    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:54.047524    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:54.507408    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:54.548318    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:55.006807    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:55.048212    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:55.506837    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:55.547784    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:56.007589    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:56.047850    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:56.507652    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:56.548039    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:57.006616    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:57.047841    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:57.507887    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:57.548211    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:58.007881    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:58.048238    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:58.507366    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:58.547489    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:59.007408    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:59.048047    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:59.506851    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:59.548059    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:00.007064    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:00.048312    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:00.506933    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:00.547748    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:01.006993    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:01.047838    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:01.507704    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:01.547486    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:02.006899    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:02.047493    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:02.508184    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:02.548086    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:03.007799    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:03.048147    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:03.506919    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:03.548070    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:04.006794    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:04.047996    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:04.506611    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:04.548149    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:05.006936    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:05.047671    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:05.507563    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:05.548481    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:06.007798    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:06.048738    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:06.507796    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:06.549808    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:07.007292    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:07.047631    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:07.506652    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:07.548010    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:08.007921    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:08.048594    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:08.508383    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:08.548149    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:09.007725    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:09.048164    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:09.512374    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:09.547752    8241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:10.007690    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:10.048152    8241 kapi.go:107] duration metric: took 1m12.008750136s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1221 18:06:10.507053    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:11.012443    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:11.507418    8241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:12.007790    8241 kapi.go:107] duration metric: took 1m10.504527946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1221 18:06:12.009739    8241 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-203484 cluster.
	I1221 18:06:12.011429    8241 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1221 18:06:12.013236    8241 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1221 18:06:12.015859    8241 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1221 18:06:12.017574    8241 addons.go:508] enable addons completed in 1m23.79016616s: enabled=[default-storageclass nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1221 18:06:12.017634    8241 start.go:233] waiting for cluster config update ...
	I1221 18:06:12.017656    8241 start.go:242] writing updated cluster config ...
	I1221 18:06:12.017946    8241 ssh_runner.go:195] Run: rm -f paused
	I1221 18:06:12.376177    8241 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1221 18:06:12.378152    8241 out.go:177] * Done! kubectl is now configured to use "addons-203484" cluster and "default" namespace by default
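	
	
	==> example: label-selector pod wait <==
	The kapi.go:96 lines above show minikube polling each addon's pods by label
	selector on a roughly 500ms cadence until one reports Running, then logging a
	duration metric (kapi.go:107). Below is a minimal client-go sketch of that
	loop, not minikube's actual kapi.go code; the function name, namespace, and
	timeout are illustrative.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPod lists pods matching selector every 500ms until one reports
	// phase Running, then prints how long the wait took.
	func waitForPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitForPod(kubernetes.NewForConfigOrDie(cfg), "gcp-auth",
			"kubernetes.io/minikube-addons=gcp-auth", 3*time.Minute); err != nil {
			panic(err)
		}
	}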
	
	
	==> Docker <==
	Dec 21 18:06:58 addons-203484 dockerd[1099]: time="2023-12-21T18:06:58.256760688Z" level=info msg="ignoring event" container=e299327c3d7a9d5c0d0aa74cbfd57e50addeb9db409301ba813923a7b0292d9c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:06:59 addons-203484 dockerd[1099]: time="2023-12-21T18:06:59.302207146Z" level=info msg="ignoring event" container=86e8c64aad3821705af493c1578c4a1cfc8c799da517ab9c9b69a3e0c524fe6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:06 addons-203484 cri-dockerd[1310]: time="2023-12-21T18:07:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/db034957572a261b5e7097df1b38162ace72c9ae9760f379bad55048b48287df/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 21 18:07:06 addons-203484 cri-dockerd[1310]: time="2023-12-21T18:07:06Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Dec 21 18:07:11 addons-203484 dockerd[1099]: time="2023-12-21T18:07:11.736461430Z" level=info msg="ignoring event" container=522bf5a174a925c3a4961f159dfc9ee0636599ec0b0101d0c014bdef5f0573c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:12 addons-203484 dockerd[1099]: time="2023-12-21T18:07:12.749220116Z" level=info msg="ignoring event" container=d4b7a18abf34a8bb270f7200aaa8e87d3df6bd796bc26cdd597f47eff3c2aba2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:12 addons-203484 dockerd[1099]: time="2023-12-21T18:07:12.885238258Z" level=info msg="ignoring event" container=db034957572a261b5e7097df1b38162ace72c9ae9760f379bad55048b48287df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:13 addons-203484 dockerd[1099]: time="2023-12-21T18:07:13.998551505Z" level=info msg="ignoring event" container=3b823c292241249bfb55f5f3235a841ae2509b0d59ec8a0631f867c53c2ccb69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.726564359Z" level=info msg="ignoring event" container=2a2f68471bcecaddf1f986fe1a49020512bb9b4e2ad1d178a5dbb819abbdd539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.764784156Z" level=info msg="ignoring event" container=65041ae90607460986b5ca82851b5b2f84dbd4f83e7c1e949c97b9c5d32422ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.800319279Z" level=info msg="ignoring event" container=9fb3202e1491814839be9713704042cf849571474194e3134a84158e3ef485cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.810900316Z" level=info msg="ignoring event" container=e4955d6dd8ec2f0962e1dfefddbe13ab179af5a97c3588a407ddf41980311f1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.813926311Z" level=info msg="ignoring event" container=8aaaf0e11276e464e345177245c06747ce4f09d54220a356dcef664c3af45743 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.816216804Z" level=info msg="ignoring event" container=e2c4704f653856d8b34ecb16c37ef4ab3856b165f3868ea1f6e2fb4703fd0341 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.839518773Z" level=info msg="ignoring event" container=15e63b8f6c341ec76294fef72293ee3e0df34aaa539958a7d45f5543125a01f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.839575685Z" level=info msg="ignoring event" container=920f3d24652c14194bf98807cce7dba186f1f53c80dc80ad57f8d2989fe762a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:14 addons-203484 dockerd[1099]: time="2023-12-21T18:07:14.973876931Z" level=info msg="ignoring event" container=02047f79d5be16c425c0c412d6b92c3c7b86a6132e011f804fcc6cff2022273c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:15 addons-203484 dockerd[1099]: time="2023-12-21T18:07:15.046812927Z" level=info msg="ignoring event" container=808aa23bfb6a2f958e5e4f3ef3df3104e02f1f6da4bfd5a64adc3e961c597a9c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:15 addons-203484 dockerd[1099]: time="2023-12-21T18:07:15.080604634Z" level=info msg="ignoring event" container=1aa1a1ffce17e9df804c7a49c403df012c6cbc570a2c3bd6e05aa6d52c7060da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:16 addons-203484 dockerd[1099]: time="2023-12-21T18:07:16.187028842Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f
	Dec 21 18:07:16 addons-203484 dockerd[1099]: time="2023-12-21T18:07:16.290780751Z" level=info msg="ignoring event" container=817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:16 addons-203484 dockerd[1099]: time="2023-12-21T18:07:16.413362632Z" level=info msg="ignoring event" container=4650c9f3d38c2761514a379a52fbaa836fe425d85615475772d378a26e443cad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:21 addons-203484 dockerd[1099]: time="2023-12-21T18:07:21.628286154Z" level=info msg="ignoring event" container=414617bd04491ff0e0af8b2f3b5b1a899e7516c7e56b2f330b62bf9b3c45cb49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:21 addons-203484 dockerd[1099]: time="2023-12-21T18:07:21.628923044Z" level=info msg="ignoring event" container=0e9665affa29dc0fe134faa8ae3627497e1fde47790020a7c6952eecad08fe6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:07:21 addons-203484 cri-dockerd[1310]: time="2023-12-21T18:07:21Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-58dbcc7b99-5vbqd_kube-system\": unexpected command output nsenter: cannot open /proc/4020/ns/net: No such file or directory\n with error: exit status 1"
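	
	
	==> example: SIGTERM grace period <==
	The dockerd line above ("Container failed to exit within 2s of signal 15 -
	using the force") is the standard stop sequence: SIGTERM, a grace period (2s
	here), then SIGKILL. A small sketch of the same pattern against an ordinary
	Linux process; the shell command is just a stand-in for a container that is
	slow to exit.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)
	
	func main() {
		// The trap makes the shell ignore SIGTERM, forcing the SIGKILL path.
		cmd := exec.Command("sh", "-c", "trap '' TERM; sleep 60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()
	
		cmd.Process.Signal(syscall.SIGTERM) // signal 15
		select {
		case err := <-done:
			fmt.Println("exited on SIGTERM:", err)
		case <-time.After(2 * time.Second):
			fmt.Println("failed to exit within 2s of signal 15 - using the force")
			cmd.Process.Kill() // SIGKILL
			<-done
		}
	}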
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	3b823c2922412       dd1b12fcb6097                                                                                                                8 seconds ago        Exited              hello-world-app              2                   e0dc48bc75d4c       hello-world-app-5d77478584-tmfcq
	0f95d73c7502d       nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59                                                33 seconds ago       Running             nginx                        0                   546e9c8c70d32       nginx
	f2d90f5a322cb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                     0                   ec3e6e6650b41       gcp-auth-d4c87556c-66p7h
	d3cb29a86d3fa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              patch                        0                   814291b8c01ab       ingress-nginx-admission-patch-df5dd
	61ad9118d53e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                       0                   29c79e2c9585a       ingress-nginx-admission-create-4hn6x
	9b507887b1cd8       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        About a minute ago   Running             yakd                         0                   568d49515ef45       yakd-dashboard-9947fc6bf-p6t8w
	414617bd04491       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Exited              volume-snapshot-controller   0                   aeacb5ae9014d       snapshot-controller-58dbcc7b99-5vbqd
	0e9665affa29d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Exited              volume-snapshot-controller   0                   bfbae80c4e470       snapshot-controller-58dbcc7b99-4dtl6
	77f7077701806       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner       0                   85c3ec3ddd77e       local-path-provisioner-78b46b4d5c-lhmvn
	3c14106b70082       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator       0                   7b33ffe34315e       cloud-spanner-emulator-5649c69bf6-nlxc9
	94fee18d64cc1       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                     2 minutes ago        Running             nvidia-device-plugin-ctr     0                   eb18faf53aea3       nvidia-device-plugin-daemonset-tx6g6
	cfb0c4222ed8c       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner          0                   ca8122f074efb       storage-provisioner
	11963d7d0736e       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                      0                   1b09c24af0f70       coredns-5dd5756b68-r4m6t
	f73eb3d72ecd7       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                   0                   cca048b715a0b       kube-proxy-9jc8j
	d43d5e26c7993       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler               0                   cc831449d56f3       kube-scheduler-addons-203484
	12878a81fd795       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                         0                   45f56bec607b1       etcd-addons-203484
	3ea292c52c4e3       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver               0                   93f9664354abb       kube-apiserver-addons-203484
	88966a02a3feb       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager      0                   632b76818073c       kube-controller-manager-addons-203484
	
	
	==> coredns [11963d7d0736] <==
	[INFO] 10.244.0.19:34227 - 61822 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039738s
	[INFO] 10.244.0.19:34227 - 49285 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004130444s
	[INFO] 10.244.0.19:49970 - 2326 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006626922s
	[INFO] 10.244.0.19:49970 - 21110 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000791478s
	[INFO] 10.244.0.19:34227 - 10086 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002259174s
	[INFO] 10.244.0.19:34227 - 61843 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000154931s
	[INFO] 10.244.0.19:49970 - 26134 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00012449s
	[INFO] 10.244.0.19:49027 - 57850 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091111s
	[INFO] 10.244.0.19:40773 - 23764 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000132802s
	[INFO] 10.244.0.19:49027 - 59400 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047524s
	[INFO] 10.244.0.19:40773 - 60885 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056633s
	[INFO] 10.244.0.19:40773 - 56074 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049995s
	[INFO] 10.244.0.19:49027 - 44757 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069983s
	[INFO] 10.244.0.19:40773 - 42699 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072033s
	[INFO] 10.244.0.19:49027 - 62571 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000248775s
	[INFO] 10.244.0.19:40773 - 59345 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092654s
	[INFO] 10.244.0.19:40773 - 34866 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084933s
	[INFO] 10.244.0.19:49027 - 15238 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040755s
	[INFO] 10.244.0.19:49027 - 40932 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049724s
	[INFO] 10.244.0.19:40773 - 59651 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001730592s
	[INFO] 10.244.0.19:49027 - 1697 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001413115s
	[INFO] 10.244.0.19:40773 - 42285 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000842982s
	[INFO] 10.244.0.19:49027 - 64822 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001286016s
	[INFO] 10.244.0.19:49027 - 12696 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135903s
	[INFO] 10.244.0.19:40773 - 63858 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000097364s
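	
	
	==> example: ndots search expansion <==
	The NXDOMAIN bursts above are not errors: with the resolv.conf written for the
	pod (see the cri-dockerd line in the Docker section: search
	default.svc.cluster.local svc.cluster.local cluster.local
	us-east-2.compute.internal, ndots:5), any name with fewer than five dots is
	tried against each search domain before the bare name finally answers NOERROR.
	A trailing dot marks the name as fully qualified and skips the expansion;
	minimal sketch, only meaningful when run from inside the cluster:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Fully qualified (trailing dot): resolved as-is, no search-list walk.
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}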
	
	
	==> describe nodes <==
	Name:               addons-203484
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-203484
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=addons-203484
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_21T18_04_35_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-203484
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:04:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-203484
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:07:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:07:09 +0000   Thu, 21 Dec 2023 18:04:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:07:09 +0000   Thu, 21 Dec 2023 18:04:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:07:09 +0000   Thu, 21 Dec 2023 18:04:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:07:09 +0000   Thu, 21 Dec 2023 18:04:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-203484
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 42969b8ed9e04db5a578eecc44765b0a
	  System UUID:                07d7cd95-b64d-405a-bec9-9628ef9d1a85
	  Boot ID:                    d56f90bc-750b-4b43-9ef5-5f30682d0582
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-nlxc9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  default                     hello-world-app-5d77478584-tmfcq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-66p7h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 coredns-5dd5756b68-r4m6t                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 etcd-addons-203484                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m46s
	  kube-system                 kube-apiserver-addons-203484               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-controller-manager-addons-203484      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-proxy-9jc8j                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-addons-203484               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 nvidia-device-plugin-daemonset-tx6g6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  local-path-storage          local-path-provisioner-78b46b4d5c-lhmvn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-p6t8w             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m31s  kube-proxy       
	  Normal  Starting                 2m47s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s  kubelet          Node addons-203484 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s  kubelet          Node addons-203484 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s  kubelet          Node addons-203484 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m47s  kubelet          Node addons-203484 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m46s  kubelet          Node addons-203484 status is now: NodeReady
	  Normal  RegisteredNode           2m35s  node-controller  Node addons-203484 event: Registered Node addons-203484 in Controller
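	
	
	==> example: allocation percentages <==
	A quick arithmetic check of the Allocated resources table above, using the
	node's Capacity figures (2 CPUs = 2000m, 8022500Ki memory). kubectl truncates
	to whole percentages, which is why 750m of 2000m prints as 37% rather than
	37.5%.
	
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable totals from the node description above.
		const milliCPU, memKi = 2000, 8022500
		fmt.Printf("cpu requests:    %d%%\n", 750*100/milliCPU)   // 37 (truncated from 37.5)
		fmt.Printf("memory requests: %d%%\n", 298*1024*100/memKi) // 3  (298Mi expressed in Ki)
		fmt.Printf("memory limits:   %d%%\n", 426*1024*100/memKi) // 5  (426Mi expressed in Ki)
	}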
	
	
	==> dmesg <==
	[Dec21 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014904] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.192608] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.842250] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [12878a81fd79] <==
	{"level":"info","ts":"2023-12-21T18:04:28.966049Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-21T18:04:28.96668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-21T18:04:28.966763Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-21T18:04:28.966798Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-21T18:04:28.966809Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-21T18:04:28.967314Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-21T18:04:28.96739Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-21T18:04:29.153277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-21T18:04:29.153509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-21T18:04:29.153631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-21T18:04:29.153746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-21T18:04:29.153863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-21T18:04:29.153958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-21T18:04:29.154072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-21T18:04:29.15857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:04:29.162629Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-203484 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-21T18:04:29.162814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-21T18:04:29.163989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-21T18:04:29.164279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-21T18:04:29.16524Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-21T18:04:29.17888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:04:29.179205Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:04:29.179366Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:04:29.179711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-21T18:04:29.179818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [f2d90f5a322c] <==
	2023/12/21 18:06:10 GCP Auth Webhook started!
	2023/12/21 18:06:23 Ready to marshal response ...
	2023/12/21 18:06:23 Ready to write response ...
	2023/12/21 18:06:30 Ready to marshal response ...
	2023/12/21 18:06:30 Ready to write response ...
	2023/12/21 18:06:46 Ready to marshal response ...
	2023/12/21 18:06:46 Ready to write response ...
	2023/12/21 18:06:55 Ready to marshal response ...
	2023/12/21 18:06:55 Ready to write response ...
	2023/12/21 18:07:05 Ready to marshal response ...
	2023/12/21 18:07:05 Ready to write response ...
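	
	
	==> example: opting a pod out of gcp-auth <==
	The webhook above mutates new pods to mount GCP credentials; per the start log
	earlier in this report, a pod carrying the gcp-auth-skip-secret label is
	skipped. Below is a sketch of such a pod, emitted as JSON; the label value
	"true" and the pod/container names are assumptions, since the minikube
	message only names the label key.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds", // hypothetical pod name
				// Presence of this key tells the gcp-auth webhook to skip the pod.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		out, err := json.MarshalIndent(pod, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}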
	
	
	==> kernel <==
	 18:07:22 up 49 min,  0 users,  load average: 2.02, 1.70, 0.75
	Linux addons-203484 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [3ea292c52c4e] <==
	I1221 18:05:41.324059       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1221 18:05:41.330596       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1221 18:06:32.462133       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1221 18:06:40.033478       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1221 18:06:40.043720       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1221 18:06:41.055867       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1221 18:06:42.285430       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1221 18:06:43.023288       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1221 18:06:45.793415       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1221 18:06:46.114362       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.236.239"}
	I1221 18:06:55.865124       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.239.45"}
	I1221 18:07:21.252511       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.252557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.271094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.271154       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.296775       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.296826       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.310153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.310197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.320879       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.320937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.337559       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.337609       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:07:21.457899       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:07:21.458101       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [88966a02a3fe] <==
	E1221 18:06:49.858866       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1221 18:06:50.275398       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1221 18:06:55.611955       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1221 18:06:55.623858       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tmfcq"
	I1221 18:06:55.638339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="27.040339ms"
	I1221 18:06:55.651756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.372278ms"
	I1221 18:06:55.675029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="23.05877ms"
	I1221 18:06:55.675130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.23µs"
	W1221 18:06:58.927393       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:06:58.927423       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1221 18:06:59.181843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.935µs"
	I1221 18:07:00.213064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.395µs"
	I1221 18:07:01.235105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.552µs"
	I1221 18:07:02.801054       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1221 18:07:05.179818       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1221 18:07:13.140850       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1221 18:07:13.146060       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="6.49µs"
	I1221 18:07:13.151230       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1221 18:07:14.494141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="235.941µs"
	I1221 18:07:14.570940       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I1221 18:07:14.654256       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W1221 18:07:20.378822       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:07:20.378866       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1221 18:07:21.530781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="8.008µs"
	E1221 18:07:22.313464       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [f73eb3d72ecd] <==
	I1221 18:04:49.886613       1 server_others.go:69] "Using iptables proxy"
	I1221 18:04:50.079663       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1221 18:04:50.381267       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 18:04:50.384127       1 server_others.go:152] "Using iptables Proxier"
	I1221 18:04:50.384159       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 18:04:50.384167       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 18:04:50.384252       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 18:04:50.384490       1 server.go:846] "Version info" version="v1.28.4"
	I1221 18:04:50.384500       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 18:04:50.386009       1 config.go:188] "Starting service config controller"
	I1221 18:04:50.386020       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 18:04:50.386039       1 config.go:97] "Starting endpoint slice config controller"
	I1221 18:04:50.386042       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 18:04:50.386439       1 config.go:315] "Starting node config controller"
	I1221 18:04:50.386446       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 18:04:50.486953       1 shared_informer.go:318] Caches are synced for node config
	I1221 18:04:50.486986       1 shared_informer.go:318] Caches are synced for service config
	I1221 18:04:50.487012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d43d5e26c799] <==
	W1221 18:04:33.107913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1221 18:04:33.107969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1221 18:04:33.108044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:04:33.108064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1221 18:04:33.108165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:04:33.108185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1221 18:04:33.108330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1221 18:04:33.108421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1221 18:04:33.108552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1221 18:04:33.108572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1221 18:04:33.108425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1221 18:04:33.108551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1221 18:04:33.108347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1221 18:04:33.108629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1221 18:04:33.108497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1221 18:04:33.108649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1221 18:04:33.108711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1221 18:04:33.108736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1221 18:04:33.108791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 18:04:33.108809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1221 18:04:33.108912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1221 18:04:33.108928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1221 18:04:33.109051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:04:33.109150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1221 18:04:34.698314       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.729248    2298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"15e63b8f6c341ec76294fef72293ee3e0df34aaa539958a7d45f5543125a01f2"} err="failed to get container status \"15e63b8f6c341ec76294fef72293ee3e0df34aaa539958a7d45f5543125a01f2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 15e63b8f6c341ec76294fef72293ee3e0df34aaa539958a7d45f5543125a01f2"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.729271    2298 scope.go:117] "RemoveContainer" containerID="e2c4704f653856d8b34ecb16c37ef4ab3856b165f3868ea1f6e2fb4703fd0341"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.729794    2298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e2c4704f653856d8b34ecb16c37ef4ab3856b165f3868ea1f6e2fb4703fd0341"} err="failed to get container status \"e2c4704f653856d8b34ecb16c37ef4ab3856b165f3868ea1f6e2fb4703fd0341\": rpc error: code = Unknown desc = Error response from daemon: No such container: e2c4704f653856d8b34ecb16c37ef4ab3856b165f3868ea1f6e2fb4703fd0341"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.729818    2298 scope.go:117] "RemoveContainer" containerID="e4955d6dd8ec2f0962e1dfefddbe13ab179af5a97c3588a407ddf41980311f1f"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.730343    2298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e4955d6dd8ec2f0962e1dfefddbe13ab179af5a97c3588a407ddf41980311f1f"} err="failed to get container status \"e4955d6dd8ec2f0962e1dfefddbe13ab179af5a97c3588a407ddf41980311f1f\": rpc error: code = Unknown desc = Error response from daemon: No such container: e4955d6dd8ec2f0962e1dfefddbe13ab179af5a97c3588a407ddf41980311f1f"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.733118    2298 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="97c67db6-641c-4afe-bcc5-ac779ed7d2f5" path="/var/lib/kubelet/pods/97c67db6-641c-4afe-bcc5-ac779ed7d2f5/volumes"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.733477    2298 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bf6ead2d-849d-4ba5-a75f-8daa6c245fc4" path="/var/lib/kubelet/pods/bf6ead2d-849d-4ba5-a75f-8daa6c245fc4/volumes"
	Dec 21 18:07:15 addons-203484 kubelet[2298]: I1221 18:07:15.734067    2298 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cab8e265-303e-40fb-9535-3c76a08330de" path="/var/lib/kubelet/pods/cab8e265-303e-40fb-9535-3c76a08330de/volumes"
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.588858    2298 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d786162e-a0f5-47df-a6a2-899dfb608ddc-webhook-cert\") pod \"d786162e-a0f5-47df-a6a2-899dfb608ddc\" (UID: \"d786162e-a0f5-47df-a6a2-899dfb608ddc\") "
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.588933    2298 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2hmv\" (UniqueName: \"kubernetes.io/projected/d786162e-a0f5-47df-a6a2-899dfb608ddc-kube-api-access-l2hmv\") pod \"d786162e-a0f5-47df-a6a2-899dfb608ddc\" (UID: \"d786162e-a0f5-47df-a6a2-899dfb608ddc\") "
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.594370    2298 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d786162e-a0f5-47df-a6a2-899dfb608ddc-kube-api-access-l2hmv" (OuterVolumeSpecName: "kube-api-access-l2hmv") pod "d786162e-a0f5-47df-a6a2-899dfb608ddc" (UID: "d786162e-a0f5-47df-a6a2-899dfb608ddc"). InnerVolumeSpecName "kube-api-access-l2hmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.597089    2298 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d786162e-a0f5-47df-a6a2-899dfb608ddc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d786162e-a0f5-47df-a6a2-899dfb608ddc" (UID: "d786162e-a0f5-47df-a6a2-899dfb608ddc"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.611536    2298 scope.go:117] "RemoveContainer" containerID="817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f"
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.637427    2298 scope.go:117] "RemoveContainer" containerID="817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f"
	Dec 21 18:07:16 addons-203484 kubelet[2298]: E1221 18:07:16.638664    2298 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f" containerID="817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f"
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.638713    2298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f"} err="failed to get container status \"817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 817b207f3d194f4e2c59759d62993b617e9c4955d1b3c078f0883c83fcdfe56f"
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.690185    2298 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d786162e-a0f5-47df-a6a2-899dfb608ddc-webhook-cert\") on node \"addons-203484\" DevicePath \"\""
	Dec 21 18:07:16 addons-203484 kubelet[2298]: I1221 18:07:16.690231    2298 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l2hmv\" (UniqueName: \"kubernetes.io/projected/d786162e-a0f5-47df-a6a2-899dfb608ddc-kube-api-access-l2hmv\") on node \"addons-203484\" DevicePath \"\""
	Dec 21 18:07:17 addons-203484 kubelet[2298]: I1221 18:07:17.730669    2298 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d786162e-a0f5-47df-a6a2-899dfb608ddc" path="/var/lib/kubelet/pods/d786162e-a0f5-47df-a6a2-899dfb608ddc/volumes"
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.025287    2298 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj2hd\" (UniqueName: \"kubernetes.io/projected/9ed10031-b07a-405a-a620-dda19dabe378-kube-api-access-vj2hd\") pod \"9ed10031-b07a-405a-a620-dda19dabe378\" (UID: \"9ed10031-b07a-405a-a620-dda19dabe378\") "
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.025352    2298 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7zzx\" (UniqueName: \"kubernetes.io/projected/65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d-kube-api-access-f7zzx\") pod \"65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d\" (UID: \"65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d\") "
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.028480    2298 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed10031-b07a-405a-a620-dda19dabe378-kube-api-access-vj2hd" (OuterVolumeSpecName: "kube-api-access-vj2hd") pod "9ed10031-b07a-405a-a620-dda19dabe378" (UID: "9ed10031-b07a-405a-a620-dda19dabe378"). InnerVolumeSpecName "kube-api-access-vj2hd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.028747    2298 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d-kube-api-access-f7zzx" (OuterVolumeSpecName: "kube-api-access-f7zzx") pod "65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d" (UID: "65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d"). InnerVolumeSpecName "kube-api-access-f7zzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.126150    2298 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vj2hd\" (UniqueName: \"kubernetes.io/projected/9ed10031-b07a-405a-a620-dda19dabe378-kube-api-access-vj2hd\") on node \"addons-203484\" DevicePath \"\""
	Dec 21 18:07:22 addons-203484 kubelet[2298]: I1221 18:07:22.126186    2298 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f7zzx\" (UniqueName: \"kubernetes.io/projected/65bea31b-3e1c-4db7-9c84-05eb2cf7fd7d-kube-api-access-f7zzx\") on node \"addons-203484\" DevicePath \"\""
	
	
	==> storage-provisioner [cfb0c4222ed8] <==
	I1221 18:04:56.202455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 18:04:56.257097       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 18:04:56.257199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 18:04:56.274354       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 18:04:56.274552       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-203484_da80d509-1a8b-4ca3-8388-477eb469691d!
	I1221 18:04:56.293729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"275c467d-d04b-4079-8d03-0949e021a979", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-203484_da80d509-1a8b-4ca3-8388-477eb469691d became leader
	I1221 18:04:56.375087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-203484_da80d509-1a8b-4ca3-8388-477eb469691d!
	

-- /stdout --
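The storage-provisioner log above shows the pod winning leader election on the kube-system/k8s.io-minikube-hostpath Endpoints lock. As a sketch (assuming the addons-203484 profile were still up), the current holder could be read back from the annotation that client-go's leader election writes on that object:

	kubectl --context addons-203484 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'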
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-203484 -n addons-203484
helpers_test.go:261: (dbg) Run:  kubectl --context addons-203484 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.79s)
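The failing step (addons_test.go:297) is a plain DNS query against the cluster node: the ingress-dns addon is expected to answer for the ingress host names (here hello-john.test) on the node IP, and the query timed out. A minimal manual reproduction, assuming the addons-203484 profile is still running, is the same two commands the test issues:

	NODE_IP=$(out/minikube-linux-arm64 -p addons-203484 ip)   # 192.168.49.2 in this run
	nslookup hello-john.test "$NODE_IP"                       # timed out here: nothing answered on port 53 at the node IP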

TestIngressAddonLegacy/serial/ValidateIngressAddons (51.18s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-310121 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-310121 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.653831975s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-310121 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-310121 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2e94c9dd-8677-4770-982c-f6210d1aaf91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2e94c9dd-8677-4770-982c-f6210d1aaf91] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003018799s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-310121 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1221 18:16:12.431108    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010270953s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons disable ingress-dns --alsologtostderr -v=1: (4.511148102s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons disable ingress --alsologtostderr -v=1: (7.500574827s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-310121
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-310121:

-- stdout --
	[
	    {
	        "Id": "a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718",
	        "Created": "2023-12-21T18:14:08.376304057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 55566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:14:08.691418262Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718/a86b13a158919c9e8be75cee1c0a9369dc8a3c09491c6b747492b42a3ddab718-json.log",
	        "Name": "/ingress-addon-legacy-310121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-310121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-310121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df0cc4416db2f86f2df2eb65a133011626c5be7b911065a30fc2c71f02705e01-init/diff:/var/lib/docker/overlay2/608babf4968b91d3754a5a1770f6af5ff35007ee68accb0cb2a42746e0ee2f7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df0cc4416db2f86f2df2eb65a133011626c5be7b911065a30fc2c71f02705e01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df0cc4416db2f86f2df2eb65a133011626c5be7b911065a30fc2c71f02705e01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df0cc4416db2f86f2df2eb65a133011626c5be7b911065a30fc2c71f02705e01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-310121",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-310121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-310121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-310121",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-310121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c356438d322b37767de56c266f6305fe07acf8f013289cd79d2d2572f8691b5d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c356438d322b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-310121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b13a15891",
	                        "ingress-addon-legacy-310121"
	                    ],
	                    "NetworkID": "4662a4c8fa2fa515491911d3405026dada64e6d967ae2a752af229e45f087e27",
	                    "EndpointID": "22550bf4886880136d0dd6498d241ba3d36cda0bb3d8aed63e9fa77bbff9df75",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
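The address the failing nslookup targeted (192.168.49.2) appears under NetworkSettings.Networks in the inspect output above, and it is the same value the minikube ip command reported earlier. As a sketch, it can be pulled straight from docker with a Go template:

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-310121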
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-310121 -n ingress-addon-legacy-310121
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-310121 logs -n 25: (1.012414135s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-881514 image ls                                               | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| image   | functional-881514 image load                                             | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-881514 image ls                                               | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| image   | functional-881514 image save --daemon                                    | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-881514                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-881514                                                        | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | image ls --format yaml                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-881514                                                        | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | image ls --format short                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-881514 ssh pgrep                                              | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC |                     |
	|         | buildkitd                                                                |                             |         |         |                     |                     |
	| image   | functional-881514                                                        | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-881514 image build -t                                         | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | localhost/my-image:functional-881514                                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                         |                             |         |         |                     |                     |
	| image   | functional-881514                                                        | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-881514 image ls                                               | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| delete  | -p functional-881514                                                     | functional-881514           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| start   | -p image-309766                                                          | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | --driver=docker                                                          |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | -p image-309766                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | --build-opt=build-arg=ENV_A=test_env_str                                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                                       |                             |         |         |                     |                     |
	|         | image-309766                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                                  |                             |         |         |                     |                     |
	|         | image-309766                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|         | -f inner/Dockerfile                                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                                            |                             |         |         |                     |                     |
	|         | -p image-309766                                                          |                             |         |         |                     |                     |
	| delete  | -p image-309766                                                          | image-309766                | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| start   | -p ingress-addon-legacy-310121                                           | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:15 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-310121                                              | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:15 UTC | 21 Dec 23 18:15 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-310121                                              | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:15 UTC | 21 Dec 23 18:15 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-310121                                              | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC | 21 Dec 23 18:16 UTC |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-310121 ip                                           | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC | 21 Dec 23 18:16 UTC |
	| addons  | ingress-addon-legacy-310121                                              | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC | 21 Dec 23 18:16 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-310121                                              | ingress-addon-legacy-310121 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC | 21 Dec 23 18:16 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:13:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:13:49.865333   55107 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:13:49.865501   55107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:49.865510   55107 out.go:309] Setting ErrFile to fd 2...
	I1221 18:13:49.865516   55107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:49.865759   55107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:13:49.866170   55107 out.go:303] Setting JSON to false
	I1221 18:13:49.866972   55107 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3377,"bootTime":1703179053,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:13:49.867041   55107 start.go:138] virtualization:  
	I1221 18:13:49.869802   55107 out.go:177] * [ingress-addon-legacy-310121] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:13:49.872587   55107 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:13:49.874402   55107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:13:49.872794   55107 notify.go:220] Checking for updates...
	I1221 18:13:49.878130   55107 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:13:49.880458   55107 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:13:49.882433   55107 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1221 18:13:49.884274   55107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:13:49.886381   55107 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:13:49.909914   55107 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:13:49.910021   55107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:13:50.006703   55107 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-21 18:13:49.997273839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:13:50.006803   55107 docker.go:295] overlay module found
	I1221 18:13:50.010323   55107 out.go:177] * Using the docker driver based on user configuration
	I1221 18:13:50.012228   55107 start.go:298] selected driver: docker
	I1221 18:13:50.012248   55107 start.go:902] validating driver "docker" against <nil>
	I1221 18:13:50.012267   55107 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:13:50.012994   55107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:13:50.083849   55107 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-21 18:13:50.074684801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:13:50.084018   55107 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:13:50.084273   55107 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 18:13:50.086073   55107 out.go:177] * Using Docker driver with root privileges
	I1221 18:13:50.087813   55107 cni.go:84] Creating CNI manager for ""
	I1221 18:13:50.087844   55107 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1221 18:13:50.087862   55107 start_flags.go:323] config:
	{Name:ingress-addon-legacy-310121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-310121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:13:50.090203   55107 out.go:177] * Starting control plane node ingress-addon-legacy-310121 in cluster ingress-addon-legacy-310121
	I1221 18:13:50.092202   55107 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:13:50.093969   55107 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:13:50.095864   55107 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1221 18:13:50.095956   55107 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:13:50.113475   55107 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:13:50.113497   55107 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1221 18:13:50.170159   55107 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1221 18:13:50.170189   55107 cache.go:56] Caching tarball of preloaded images
	I1221 18:13:50.170351   55107 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1221 18:13:50.172563   55107 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1221 18:13:50.174415   55107 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:13:50.296151   55107 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1221 18:14:01.142791   55107 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:14:01.142892   55107 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:14:02.251594   55107 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1221 18:14:02.251965   55107 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/config.json ...
	I1221 18:14:02.251998   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/config.json: {Name:mk4e57d7a5a5f229360a0ba1b0083c1d01feac50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:02.252177   55107 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:14:02.252221   55107 start.go:365] acquiring machines lock for ingress-addon-legacy-310121: {Name:mk6cd77b232a3b813ba5432235a9782af2097021 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:14:02.252279   55107 start.go:369] acquired machines lock for "ingress-addon-legacy-310121" in 44.112µs
	I1221 18:14:02.252299   55107 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-310121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-310121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1221 18:14:02.252373   55107 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:14:02.254900   55107 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1221 18:14:02.255201   55107 start.go:159] libmachine.API.Create for "ingress-addon-legacy-310121" (driver="docker")
	I1221 18:14:02.255236   55107 client.go:168] LocalClient.Create starting
	I1221 18:14:02.255310   55107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem
	I1221 18:14:02.255361   55107 main.go:141] libmachine: Decoding PEM data...
	I1221 18:14:02.255382   55107 main.go:141] libmachine: Parsing certificate...
	I1221 18:14:02.255440   55107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem
	I1221 18:14:02.255463   55107 main.go:141] libmachine: Decoding PEM data...
	I1221 18:14:02.255478   55107 main.go:141] libmachine: Parsing certificate...
	I1221 18:14:02.255843   55107 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-310121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:14:02.272703   55107 cli_runner.go:211] docker network inspect ingress-addon-legacy-310121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:14:02.272803   55107 network_create.go:281] running [docker network inspect ingress-addon-legacy-310121] to gather additional debugging logs...
	I1221 18:14:02.272824   55107 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-310121
	W1221 18:14:02.289799   55107 cli_runner.go:211] docker network inspect ingress-addon-legacy-310121 returned with exit code 1
	I1221 18:14:02.289833   55107 network_create.go:284] error running [docker network inspect ingress-addon-legacy-310121]: docker network inspect ingress-addon-legacy-310121: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-310121 not found
	I1221 18:14:02.289849   55107 network_create.go:286] output of [docker network inspect ingress-addon-legacy-310121]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-310121 not found
	
	** /stderr **
	I1221 18:14:02.289978   55107 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:14:02.307058   55107 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e1c60}
	I1221 18:14:02.307109   55107 network_create.go:124] attempt to create docker network ingress-addon-legacy-310121 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 18:14:02.307166   55107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 ingress-addon-legacy-310121
	I1221 18:14:02.372493   55107 network_create.go:108] docker network ingress-addon-legacy-310121 192.168.49.0/24 created
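The network-create step above can be replayed in isolation. A minimal sketch, assuming the 192.168.49.0/24 subnet is free as the probe reported (names, labels, and options copied from the log; digest-pinned image references dropped):

	docker network create \
	  --driver=bridge \
	  --subnet=192.168.49.0/24 \
	  --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 \
	  ingress-addon-legacy-310121
	# Confirm the subnet and gateway that were applied:
	docker network inspect ingress-addon-legacy-310121 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'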
	I1221 18:14:02.372527   55107 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-310121" container
	I1221 18:14:02.372626   55107 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:14:02.389073   55107 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-310121 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:14:02.408481   55107 oci.go:103] Successfully created a docker volume ingress-addon-legacy-310121
	I1221 18:14:02.408560   55107 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-310121-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 --entrypoint /usr/bin/test -v ingress-addon-legacy-310121:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:14:03.696973   55107 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-310121-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 --entrypoint /usr/bin/test -v ingress-addon-legacy-310121:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.288372711s)
	I1221 18:14:03.697004   55107 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-310121
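The "preload sidecar" above is a short-lived container whose entrypoint is /usr/bin/test: mounting the fresh volume at /var and asserting -d /var/lib both forces Docker to populate the volume and verifies the expected directory exists. A sketch of the same two steps (image digest omitted for brevity):

	docker volume create ingress-addon-legacy-310121 \
	  --label name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 \
	  --label created_by.minikube.sigs.k8s.io=true
	docker run --rm \
	  --entrypoint /usr/bin/test \
	  -v ingress-addon-legacy-310121:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822 \
	  -d /var/lib && echo "volume ready"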
	I1221 18:14:03.697023   55107 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1221 18:14:03.697040   55107 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:14:03.697122   55107 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-310121:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:14:08.284357   55107 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-310121:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.587188127s)
	I1221 18:14:08.284390   55107 kic.go:203] duration metric: took 4.587345 seconds to extract preloaded images to volume
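The extraction itself is just tar running in a throwaway container: the lz4 tarball is bind-mounted read-only and unpacked into the named volume. A sketch under the assumption that the preload lives under $HOME/.minikube (the log uses a Jenkins workspace path instead):

	PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4"
	docker run --rm \
	  --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" \
	  -v ingress-addon-legacy-310121:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822 \
	  -I lz4 -xf /preloaded.tar -C /extractDir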
	W1221 18:14:08.284537   55107 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:14:08.284644   55107 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:14:08.360062   55107 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-310121 --name ingress-addon-legacy-310121 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-310121 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-310121 --network ingress-addon-legacy-310121 --ip 192.168.49.2 --volume ingress-addon-legacy-310121:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:14:08.698875   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Running}}
	I1221 18:14:08.723609   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:14:08.751041   55107 cli_runner.go:164] Run: docker exec ingress-addon-legacy-310121 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:14:08.825261   55107 oci.go:144] the created container "ingress-addon-legacy-310121" has a running status.
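The three probes after the big docker run can be replayed by hand; they check the container state twice (Running, then Status) and confirm the iptables alternatives are set up inside the node:

	docker container inspect ingress-addon-legacy-310121 --format '{{.State.Running}}'
	docker container inspect ingress-addon-legacy-310121 --format '{{.State.Status}}'
	docker exec ingress-addon-legacy-310121 stat /var/lib/dpkg/alternatives/iptables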
	I1221 18:14:08.825286   55107 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa...
	I1221 18:14:09.609077   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1221 18:14:09.609126   55107 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:14:09.637639   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:14:09.663812   55107 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:14:09.663830   55107 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-310121 chown docker:docker /home/docker/.ssh/authorized_keys]
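minikube pipes the public key bytes into the container via kic_runner; a rough equivalent with stock tooling (the docker cp stand-in is an assumption, not what the code actually does) looks like:

	ssh-keygen -t rsa -N '' -f ./id_rsa        # the log's copy of the .pub was 381 bytes
	docker cp ./id_rsa.pub \
	  ingress-addon-legacy-310121:/home/docker/.ssh/authorized_keys
	docker exec --privileged ingress-addon-legacy-310121 \
	  chown docker:docker /home/docker/.ssh/authorized_keys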
	I1221 18:14:09.724966   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:14:09.751512   55107 machine.go:88] provisioning docker machine ...
	I1221 18:14:09.751542   55107 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-310121"
	I1221 18:14:09.751615   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:09.773015   55107 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:09.773447   55107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1221 18:14:09.773460   55107 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-310121 && echo "ingress-addon-legacy-310121" | sudo tee /etc/hostname
	I1221 18:14:09.947635   55107 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-310121
	
	I1221 18:14:09.947786   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:09.967164   55107 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:09.967599   55107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1221 18:14:09.967619   55107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-310121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-310121/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-310121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:14:10.116360   55107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
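The /etc/hosts script above is an idempotent add-or-rewrite: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append. The same pattern, cleaned up:

	HOST=ingress-addon-legacy-310121
	if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	  fi
	fi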
	I1221 18:14:10.116386   55107 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-2360/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-2360/.minikube}
	I1221 18:14:10.116412   55107 ubuntu.go:177] setting up certificates
	I1221 18:14:10.116422   55107 provision.go:83] configureAuth start
	I1221 18:14:10.116480   55107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-310121
	I1221 18:14:10.136732   55107 provision.go:138] copyHostCerts
	I1221 18:14:10.136770   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem
	I1221 18:14:10.136799   55107 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem, removing ...
	I1221 18:14:10.136810   55107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem
	I1221 18:14:10.136886   55107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem (1082 bytes)
	I1221 18:14:10.136962   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem
	I1221 18:14:10.136985   55107 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem, removing ...
	I1221 18:14:10.136993   55107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem
	I1221 18:14:10.137018   55107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem (1123 bytes)
	I1221 18:14:10.137060   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem
	I1221 18:14:10.137081   55107 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem, removing ...
	I1221 18:14:10.137087   55107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem
	I1221 18:14:10.137111   55107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem (1675 bytes)
	I1221 18:14:10.137188   55107 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-310121 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-310121]
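The server certificate is generated in Go (crypto.go); an openssl sketch of an equivalent cert, with the SAN list copied from the log line above (file names here are assumptions):

	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ingress-addon-legacy-310121"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-310121')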
	I1221 18:14:10.372756   55107 provision.go:172] copyRemoteCerts
	I1221 18:14:10.372827   55107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:14:10.372867   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:10.390507   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:14:10.493410   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1221 18:14:10.493474   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 18:14:10.520783   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1221 18:14:10.520840   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1221 18:14:10.547989   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1221 18:14:10.548047   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 18:14:10.576522   55107 provision.go:86] duration metric: configureAuth took 460.066339ms
	I1221 18:14:10.576546   55107 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:14:10.576725   55107 config.go:182] Loaded profile config "ingress-addon-legacy-310121": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1221 18:14:10.576779   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:10.594143   55107 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:10.594542   55107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1221 18:14:10.594553   55107 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1221 18:14:10.740713   55107 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1221 18:14:10.740731   55107 ubuntu.go:71] root file system type: overlay
	I1221 18:14:10.740833   55107 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1221 18:14:10.740893   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:10.759573   55107 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:10.760000   55107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1221 18:14:10.760082   55107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1221 18:14:10.925030   55107 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1221 18:14:10.925127   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:10.943326   55107 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:10.943822   55107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1221 18:14:10.943848   55107 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1221 18:14:11.748903   55107 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-21 18:14:10.920601649 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1221 18:14:11.748947   55107 machine.go:91] provisioned docker machine in 1.997416379s
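The long diff above comes from an install-if-changed pattern: write the candidate unit to docker.service.new, and only when diff -u reports a difference (non-zero exit) swap it in and reload/enable/restart. The same pattern in isolation:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload &&
	  sudo systemctl enable docker &&
	  sudo systemctl restart docker
	}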
	I1221 18:14:11.748957   55107 client.go:171] LocalClient.Create took 9.493713002s
	I1221 18:14:11.748971   55107 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-310121" took 9.49376993s
	I1221 18:14:11.748979   55107 start.go:300] post-start starting for "ingress-addon-legacy-310121" (driver="docker")
	I1221 18:14:11.748989   55107 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:14:11.749055   55107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:14:11.749098   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:11.767538   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:14:11.873840   55107 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:14:11.877922   55107 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:14:11.877959   55107 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:14:11.877970   55107 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:14:11.877977   55107 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:14:11.877987   55107 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/addons for local assets ...
	I1221 18:14:11.878046   55107 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/files for local assets ...
	I1221 18:14:11.878142   55107 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem -> 76602.pem in /etc/ssl/certs
	I1221 18:14:11.878154   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem -> /etc/ssl/certs/76602.pem
	I1221 18:14:11.878253   55107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:14:11.888170   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem --> /etc/ssl/certs/76602.pem (1708 bytes)
	I1221 18:14:11.914998   55107 start.go:303] post-start completed in 166.0047ms
	I1221 18:14:11.915409   55107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-310121
	I1221 18:14:11.932614   55107 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/config.json ...
	I1221 18:14:11.932866   55107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:14:11.932904   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:11.951479   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:14:12.053486   55107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:14:12.059244   55107 start.go:128] duration metric: createHost completed in 9.806855273s
	I1221 18:14:12.059267   55107 start.go:83] releasing machines lock for "ingress-addon-legacy-310121", held for 9.80697772s
	I1221 18:14:12.059362   55107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-310121
	I1221 18:14:12.077414   55107 ssh_runner.go:195] Run: cat /version.json
	I1221 18:14:12.077480   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:12.077720   55107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:14:12.077787   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:14:12.098203   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:14:12.098634   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:14:12.336129   55107 ssh_runner.go:195] Run: systemctl --version
	I1221 18:14:12.341582   55107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:14:12.347067   55107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1221 18:14:12.376409   55107 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:14:12.376488   55107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1221 18:14:12.395836   55107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1221 18:14:12.416343   55107 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 18:14:12.416370   55107 start.go:475] detecting cgroup driver to use...
	I1221 18:14:12.416401   55107 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:14:12.416512   55107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:14:12.436697   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1221 18:14:12.448740   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1221 18:14:12.461114   55107 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1221 18:14:12.461235   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1221 18:14:12.473149   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:14:12.484753   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1221 18:14:12.496612   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:14:12.508411   55107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:14:12.519976   55107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1221 18:14:12.532392   55107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:14:12.543406   55107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:14:12.554500   55107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:14:12.654063   55107 ssh_runner.go:195] Run: sudo systemctl restart containerd
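The sed edits above drive /etc/containerd/config.toml toward a handful of values; checking them afterwards is one grep (the keys shown are the targets of the edits, not the whole file):

	# Expected after the edits:
	#   sandbox_image = "registry.k8s.io/pause:3.2"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false          # cgroupfs driver, matching the host
	#   conf_dir = "/etc/cni/net.d"
	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' \
	  /etc/containerd/config.toml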
	I1221 18:14:12.780217   55107 start.go:475] detecting cgroup driver to use...
	I1221 18:14:12.780327   55107 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:14:12.780424   55107 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1221 18:14:12.802521   55107 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1221 18:14:12.802604   55107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1221 18:14:12.818499   55107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:14:12.840767   55107 ssh_runner.go:195] Run: which cri-dockerd
	I1221 18:14:12.846159   55107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1221 18:14:12.857707   55107 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1221 18:14:12.883028   55107 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1221 18:14:13.006440   55107 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1221 18:14:13.117978   55107 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1221 18:14:13.118144   55107 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1221 18:14:13.142953   55107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:14:13.247311   55107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1221 18:14:13.517588   55107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:14:13.543269   55107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:14:13.572175   55107 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1221 18:14:13.572284   55107 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-310121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:14:13.589081   55107 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 18:14:13.593464   55107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:14:13.606069   55107 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1221 18:14:13.606139   55107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:14:13.626511   55107 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1221 18:14:13.626529   55107 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1221 18:14:13.626592   55107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1221 18:14:13.636705   55107 ssh_runner.go:195] Run: which lz4
	I1221 18:14:13.640846   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1221 18:14:13.640945   55107 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1221 18:14:13.645127   55107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1221 18:14:13.645162   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1221 18:14:15.600771   55107 docker.go:635] Took 1.959863 seconds to copy over tarball
	I1221 18:14:15.600849   55107 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1221 18:14:18.017826   55107 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.416948377s)
	I1221 18:14:18.017858   55107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1221 18:14:18.319096   55107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1221 18:14:18.329786   55107 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1221 18:14:18.351480   55107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:14:18.455273   55107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1221 18:14:20.700684   55107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.24537594s)
	I1221 18:14:20.700772   55107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:14:20.721780   55107 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1221 18:14:20.721796   55107 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1221 18:14:20.721805   55107 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1221 18:14:20.723515   55107 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:20.723667   55107 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:20.723877   55107 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:20.723967   55107 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:20.724036   55107 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1221 18:14:20.724102   55107 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:20.724162   55107 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:20.724309   55107 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:20.725279   55107 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:20.725681   55107 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:20.725827   55107 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:20.725833   55107 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1221 18:14:20.725895   55107 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:20.726019   55107 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:20.726057   55107 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:20.726100   55107 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W1221 18:14:21.045442   55107 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.045683   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:21.076133   55107 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1221 18:14:21.076223   55107 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:21.076298   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	W1221 18:14:21.078997   55107 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.079148   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1221 18:14:21.101489   55107 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.101592   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1221 18:14:21.101769   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1221 18:14:21.103214   55107 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.103427   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1221 18:14:21.106914   55107 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.107131   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:21.107936   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1221 18:14:21.117162   55107 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.117378   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1221 18:14:21.135483   55107 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1221 18:14:21.135534   55107 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:21.135591   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:21.158192   55107 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1221 18:14:21.158241   55107 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:21.158309   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:21.169890   55107 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1221 18:14:21.169940   55107 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:21.169993   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:21.170304   55107 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1221 18:14:21.170334   55107 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:21.170378   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:21.211226   55107 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1221 18:14:21.211276   55107 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1221 18:14:21.211323   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1221 18:14:21.224314   55107 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1221 18:14:21.224405   55107 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:21.224490   55107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1221 18:14:21.251730   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1221 18:14:21.252143   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1221 18:14:21.254190   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1221 18:14:21.254545   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1221 18:14:21.264902   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1221 18:14:21.273973   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W1221 18:14:21.282998   55107 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1221 18:14:21.283182   55107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:21.304077   55107 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1221 18:14:21.304167   55107 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:21.304244   55107 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:21.337811   55107 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1221 18:14:21.337890   55107 cache_images.go:92] LoadImages completed in 616.07323ms
	W1221 18:14:21.337963   55107 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17848-2360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
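The "arch mismatch: want arm64 got amd64" warnings can be reproduced with a plain image inspect; on this arm64 host every amd64-only image is removed and re-transferred from the local image cache:

	docker image inspect registry.k8s.io/etcd:3.4.3-0 \
	  --format '{{.Os}}/{{.Architecture}}'    # e.g. linux/amd64
	uname -m                                  # aarch64 on this host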
	I1221 18:14:21.338028   55107 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1221 18:14:21.398795   55107 cni.go:84] Creating CNI manager for ""
	I1221 18:14:21.398828   55107 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1221 18:14:21.398847   55107 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:14:21.398877   55107 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-310121 NodeName:ingress-addon-legacy-310121 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1221 18:14:21.399015   55107 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-310121"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 18:14:21.399078   55107 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-310121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-310121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
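The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later handed to kubeadm; a hedged sketch of the typical consumption (the exact minikube invocation, with its full preflight-errors list, appears later in the log stream):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests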
	I1221 18:14:21.399145   55107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1221 18:14:21.409608   55107 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:14:21.409675   55107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:14:21.420308   55107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1221 18:14:21.441777   55107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1221 18:14:21.462654   55107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1221 18:14:21.483894   55107 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:14:21.488127   55107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:14:21.501447   55107 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121 for IP: 192.168.49.2
	I1221 18:14:21.501479   55107 certs.go:190] acquiring lock for shared ca certs: {Name:mke521584ecf21f65224996fffab5af98b398a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:21.501651   55107 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key
	I1221 18:14:21.501702   55107 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key
	I1221 18:14:21.501756   55107 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key
	I1221 18:14:21.501769   55107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt with IP's: []
	I1221 18:14:21.982352   55107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt ...
	I1221 18:14:21.982384   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: {Name:mk38ffdd769ea8ab739fae1e9220a16d6f024646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:21.982574   55107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key ...
	I1221 18:14:21.982596   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key: {Name:mkd56682ae65e513c818a933c4b42609c5c884ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:21.982684   55107 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key.dd3b5fb2
	I1221 18:14:21.982709   55107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:14:22.385097   55107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt.dd3b5fb2 ...
	I1221 18:14:22.385131   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt.dd3b5fb2: {Name:mkc5c31f0e9a4355c630ba0d712fa9c70cf4071c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:22.385309   55107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key.dd3b5fb2 ...
	I1221 18:14:22.385329   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key.dd3b5fb2: {Name:mkd15f814112815970d17351ea4b8dd76e14bc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:22.385402   55107 certs.go:337] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt
	I1221 18:14:22.385479   55107 certs.go:341] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key
	I1221 18:14:22.385536   55107 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.key
	I1221 18:14:22.385553   55107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.crt with IP's: []
	I1221 18:14:22.618260   55107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.crt ...
	I1221 18:14:22.618288   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.crt: {Name:mk1cb8682f500a345144e2d640b637201b913c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:22.618457   55107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.key ...
	I1221 18:14:22.618469   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.key: {Name:mkdf84eb9ca7acfd001743f879a40a4d4b4a12a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
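certs.go generates these profile certificates in-process (crypto.go), signing the apiserver cert with the shared minikubeCA for the SAN IPs listed above. A rough openssl equivalent, purely illustrative (the CN and validity period here are assumptions, not values read from the log):

	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1")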
	I1221 18:14:22.618542   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1221 18:14:22.618565   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1221 18:14:22.618581   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1221 18:14:22.618599   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1221 18:14:22.618613   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1221 18:14:22.618625   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1221 18:14:22.618639   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1221 18:14:22.618653   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1221 18:14:22.618712   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660.pem (1338 bytes)
	W1221 18:14:22.618747   55107 certs.go:433] ignoring /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660_empty.pem, impossibly tiny 0 bytes
	I1221 18:14:22.618761   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 18:14:22.618794   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem (1082 bytes)
	I1221 18:14:22.618822   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:14:22.618849   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem (1675 bytes)
	I1221 18:14:22.618895   55107 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem (1708 bytes)
	I1221 18:14:22.618926   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem -> /usr/share/ca-certificates/76602.pem
	I1221 18:14:22.618951   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:22.618966   55107 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660.pem -> /usr/share/ca-certificates/7660.pem
	I1221 18:14:22.619561   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:14:22.647150   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 18:14:22.674187   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:14:22.701473   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 18:14:22.728697   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:14:22.755669   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1221 18:14:22.783316   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:14:22.810048   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:14:22.837004   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem --> /usr/share/ca-certificates/76602.pem (1708 bytes)
	I1221 18:14:22.864978   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:14:22.893310   55107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660.pem --> /usr/share/ca-certificates/7660.pem (1338 bytes)
	I1221 18:14:22.921771   55107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:14:22.942685   55107 ssh_runner.go:195] Run: openssl version
	I1221 18:14:22.949675   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76602.pem && ln -fs /usr/share/ca-certificates/76602.pem /etc/ssl/certs/76602.pem"
	I1221 18:14:22.961193   55107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76602.pem
	I1221 18:14:22.965592   55107 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 21 18:09 /usr/share/ca-certificates/76602.pem
	I1221 18:14:22.965657   55107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76602.pem
	I1221 18:14:22.974045   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76602.pem /etc/ssl/certs/3ec20f2e.0"
	I1221 18:14:22.985231   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:14:22.996174   55107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:23.000540   55107 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:23.000621   55107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:23.008866   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1221 18:14:23.020528   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7660.pem && ln -fs /usr/share/ca-certificates/7660.pem /etc/ssl/certs/7660.pem"
	I1221 18:14:23.031657   55107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7660.pem
	I1221 18:14:23.036011   55107 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 21 18:09 /usr/share/ca-certificates/7660.pem
	I1221 18:14:23.036071   55107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7660.pem
	I1221 18:14:23.044484   55107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7660.pem /etc/ssl/certs/51391683.0"
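The ls/openssl/ln sequence above is standard OpenSSL trust-store maintenance: CAs are looked up via symlinks named after their subject hash, so for each PEM the runner computes `openssl x509 -hash -noout` and links `<hash>.0` to it (b5213941.0 for minikubeCA.pem, matching the log). Condensed:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"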
	I1221 18:14:23.055759   55107 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:14:23.059942   55107 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:14:23.060007   55107 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-310121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-310121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:14:23.060146   55107 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1221 18:14:23.079885   55107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:14:23.090576   55107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:14:23.100990   55107 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:14:23.101063   55107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:14:23.111756   55107 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:14:23.111797   55107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
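The long --ignore-preflight-errors list in the invocation above exists because minikube pre-provisions the directories, manifests, and kubelet port that kubeadm would otherwise refuse to overwrite, and because SystemVerification is expected to fail under the docker driver (see the "ignoring SystemVerification" line earlier). A hypothetical reduction keeping only the check that genuinely must be skipped here, not what the test actually ran:

	sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification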
	I1221 18:14:23.168459   55107 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1221 18:14:23.168724   55107 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:14:23.378154   55107 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:14:23.378224   55107 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1221 18:14:23.378324   55107 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I1221 18:14:23.378405   55107 kubeadm.go:322] OS: Linux
	I1221 18:14:23.378486   55107 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:14:23.378568   55107 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:14:23.378660   55107 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:14:23.378744   55107 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:14:23.378841   55107 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:14:23.378897   55107 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:14:23.466866   55107 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:14:23.466970   55107 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:14:23.467065   55107 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:14:23.682148   55107 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:14:23.682359   55107 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:14:23.682443   55107 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:14:23.792699   55107 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:14:23.797001   55107 out.go:204]   - Generating certificates and keys ...
	I1221 18:14:23.797170   55107 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:14:23.797250   55107 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:14:24.686900   55107 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:14:25.243717   55107 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:14:25.684202   55107 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:14:26.320036   55107 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:14:26.813778   55107 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:14:26.814065   55107 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-310121 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:14:27.985312   55107 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:14:27.985611   55107 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-310121 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:14:28.820666   55107 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:14:29.196659   55107 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:14:29.481357   55107 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:14:29.481699   55107 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:14:29.788495   55107 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:14:30.240527   55107 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:14:31.040785   55107 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:14:31.706850   55107 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:14:31.707766   55107 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:14:31.710182   55107 out.go:204]   - Booting up control plane ...
	I1221 18:14:31.710281   55107 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:14:31.717821   55107 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:14:31.720033   55107 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:14:31.721482   55107 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:14:31.724549   55107 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:14:43.727312   55107 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002346 seconds
	I1221 18:14:43.727448   55107 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:14:43.742274   55107 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:14:44.266446   55107 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:14:44.266590   55107 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-310121 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1221 18:14:44.774569   55107 kubeadm.go:322] [bootstrap-token] Using token: 2bryv2.vrkpdye13v2fnn2r
	I1221 18:14:44.776812   55107 out.go:204]   - Configuring RBAC rules ...
	I1221 18:14:44.776932   55107 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:14:44.781282   55107 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:14:44.789065   55107 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:14:44.793934   55107 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:14:44.796657   55107 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:14:44.799211   55107 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:14:44.812573   55107 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:14:45.119185   55107 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:14:45.229330   55107 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:14:45.231079   55107 kubeadm.go:322] 
	I1221 18:14:45.231149   55107 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:14:45.231156   55107 kubeadm.go:322] 
	I1221 18:14:45.231229   55107 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:14:45.231234   55107 kubeadm.go:322] 
	I1221 18:14:45.231258   55107 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:14:45.231713   55107 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:14:45.231768   55107 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:14:45.231773   55107 kubeadm.go:322] 
	I1221 18:14:45.231824   55107 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:14:45.231914   55107 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:14:45.231980   55107 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:14:45.231985   55107 kubeadm.go:322] 
	I1221 18:14:45.232274   55107 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:14:45.232353   55107 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:14:45.232358   55107 kubeadm.go:322] 
	I1221 18:14:45.232649   55107 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2bryv2.vrkpdye13v2fnn2r \
	I1221 18:14:45.232753   55107 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b \
	I1221 18:14:45.234672   55107 kubeadm.go:322]     --control-plane 
	I1221 18:14:45.234688   55107 kubeadm.go:322] 
	I1221 18:14:45.235893   55107 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:14:45.235906   55107 kubeadm.go:322] 
	I1221 18:14:45.236212   55107 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2bryv2.vrkpdye13v2fnn2r \
	I1221 18:14:45.238192   55107 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b 
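A joining node can verify the --discovery-token-ca-cert-hash printed above against the cluster CA. With this cluster's certificateDir of /var/lib/minikube/certs (the kubeadm default is /etc/kubernetes/pki), the standard check for an RSA CA key is:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the 2f6b4ffd… digest shown in the join commands above.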
	I1221 18:14:45.245610   55107 kubeadm.go:322] W1221 18:14:23.167682    1655 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1221 18:14:45.245790   55107 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1221 18:14:45.245912   55107 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1221 18:14:45.246110   55107 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1221 18:14:45.246208   55107 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:14:45.246328   55107 kubeadm.go:322] W1221 18:14:31.718135    1655 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1221 18:14:45.246456   55107 kubeadm.go:322] W1221 18:14:31.720098    1655 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1221 18:14:45.246470   55107 cni.go:84] Creating CNI manager for ""
	I1221 18:14:45.246485   55107 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1221 18:14:45.246503   55107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:14:45.246626   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:45.246697   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=ingress-addon-legacy-310121 minikube.k8s.io/updated_at=2023_12_21T18_14_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:45.768425   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:45.768482   55107 ops.go:34] apiserver oom_adj: -16
	I1221 18:14:46.269182   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:46.769415   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:47.269252   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:47.768469   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:48.269491   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:48.768726   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:49.269386   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:49.768587   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:50.269449   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:50.769357   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:51.269080   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:51.768584   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:52.269323   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:52.768481   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:53.268541   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:53.769339   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:54.269282   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:54.769363   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:55.268922   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:55.768611   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:56.269162   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:56.769115   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:57.269344   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:57.768585   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:58.268538   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:58.769212   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:59.268775   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:14:59.768891   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:00.268593   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:00.768566   55107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:01.023558   55107 kubeadm.go:1088] duration metric: took 15.776980663s to wait for elevateKubeSystemPrivileges.
	I1221 18:15:01.023587   55107 kubeadm.go:406] StartCluster complete in 37.963599456s
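elevateKubeSystemPrivileges is the polling loop above: roughly every 500ms it re-runs `kubectl get sa default` until the default service account exists (15.78s here), so that the minikube-rbac cluster-admin binding created earlier can take effect. A minimal sketch of the same poll:

	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done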
	I1221 18:15:01.023603   55107 settings.go:142] acquiring lock: {Name:mk8f5959956e96f0518268d8a4693f16253e6146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:15:01.023660   55107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:15:01.024435   55107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/kubeconfig: {Name:mkd5570705146782261fe0b7e76619864f470748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:15:01.025223   55107 kapi.go:59] client config for ingress-addon-legacy-310121: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key", CAFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:01.026365   55107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:15:01.026630   55107 config.go:182] Loaded profile config "ingress-addon-legacy-310121": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1221 18:15:01.026681   55107 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1221 18:15:01.026746   55107 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-310121"
	I1221 18:15:01.026759   55107 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-310121"
	I1221 18:15:01.026794   55107 host.go:66] Checking if "ingress-addon-legacy-310121" exists ...
	I1221 18:15:01.027302   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:15:01.028093   55107 cert_rotation.go:137] Starting client certificate rotation controller
	I1221 18:15:01.028152   55107 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-310121"
	I1221 18:15:01.028180   55107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-310121"
	I1221 18:15:01.028505   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:15:01.087912   55107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:15:01.089889   55107 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:15:01.089909   55107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:15:01.089972   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:15:01.092269   55107 kapi.go:59] client config for ingress-addon-legacy-310121: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key", CAFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:01.092561   55107 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-310121"
	I1221 18:15:01.092600   55107 host.go:66] Checking if "ingress-addon-legacy-310121" exists ...
	I1221 18:15:01.093080   55107 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-310121 --format={{.State.Status}}
	I1221 18:15:01.147077   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:15:01.153964   55107 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:15:01.153985   55107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:15:01.154051   55107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-310121
	I1221 18:15:01.184232   55107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/ingress-addon-legacy-310121/id_rsa Username:docker}
	I1221 18:15:01.393716   55107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:15:01.410322   55107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:15:01.424324   55107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:15:01.542832   55107 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-310121" context rescaled to 1 replicas
	I1221 18:15:01.542911   55107 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1221 18:15:01.545925   55107 out.go:177] * Verifying Kubernetes components...
	I1221 18:15:01.548367   55107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:15:02.309414   55107 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
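The sed pipeline run at 18:15:01 splices a hosts plugin stanza into the CoreDNS Corefile ahead of its forward directive and re-applies the ConfigMap with kubectl replace. The injected block, taken verbatim from that command, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

fallthrough ensures every name other than host.minikube.internal still proceeds to the normal upstream forwarders.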
	I1221 18:15:02.350873   55107 kapi.go:59] client config for ingress-addon-legacy-310121: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.key", CAFile:"/home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:02.351233   55107 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-310121" to be "Ready" ...
	I1221 18:15:02.358691   55107 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1221 18:15:02.357058   55107 node_ready.go:49] node "ingress-addon-legacy-310121" has status "Ready":"True"
	I1221 18:15:02.358797   55107 node_ready.go:38] duration metric: took 7.525896ms waiting for node "ingress-addon-legacy-310121" to be "Ready" ...
	I1221 18:15:02.358850   55107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:15:02.361473   55107 addons.go:508] enable addons completed in 1.334789085s: enabled=[storage-provisioner default-storageclass]
	I1221 18:15:02.367267   55107 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-w2452" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:04.377881   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:06.873657   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:08.873848   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:11.372875   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:13.873501   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:16.373196   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:18.872954   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:20.873192   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:22.873679   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:25.372546   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:27.373655   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:29.872656   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:31.873070   55107 pod_ready.go:102] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:33.872532   55107 pod_ready.go:92] pod "coredns-66bff467f8-w2452" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:33.872558   55107 pod_ready.go:81] duration metric: took 31.505261807s waiting for pod "coredns-66bff467f8-w2452" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.872568   55107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.877097   55107 pod_ready.go:92] pod "etcd-ingress-addon-legacy-310121" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:33.877120   55107 pod_ready.go:81] duration metric: took 4.544139ms waiting for pod "etcd-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.877131   55107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.881764   55107 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-310121" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:33.881789   55107 pod_ready.go:81] duration metric: took 4.623918ms waiting for pod "kube-apiserver-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.881805   55107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.886323   55107 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-310121" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:33.886346   55107 pod_ready.go:81] duration metric: took 4.532717ms waiting for pod "kube-controller-manager-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.886357   55107 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhzzm" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.890759   55107 pod_ready.go:92] pod "kube-proxy-mhzzm" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:33.890783   55107 pod_ready.go:81] duration metric: took 4.400949ms waiting for pod "kube-proxy-mhzzm" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:33.890794   55107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:34.068177   55107 request.go:629] Waited for 177.264334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-310121
	I1221 18:15:34.268204   55107 request.go:629] Waited for 197.352896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-310121
	I1221 18:15:34.271103   55107 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-310121" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:34.271130   55107 pod_ready.go:81] duration metric: took 380.329123ms waiting for pod "kube-scheduler-ingress-addon-legacy-310121" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:34.271151   55107 pod_ready.go:38] duration metric: took 31.91227273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
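The readiness phase above is implemented with client-go polling inside minikube; an approximately equivalent CLI check for the slowest component here (CoreDNS, 31.5s) would be:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s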
	I1221 18:15:34.271169   55107 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:15:34.271249   55107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:15:34.284262   55107 api_server.go:72] duration metric: took 32.741306932s to wait for apiserver process to appear ...
	I1221 18:15:34.284292   55107 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:15:34.284313   55107 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 18:15:34.292892   55107 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1221 18:15:34.293827   55107 api_server.go:141] control plane version: v1.18.20
	I1221 18:15:34.293849   55107 api_server.go:131] duration metric: took 9.550218ms to wait for apiserver health ...
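/healthz on the secured port answers unauthenticated requests under the default system:public-info-viewer RBAC binding (present since Kubernetes 1.14), so the same probe can be reproduced by hand; -k skips verification of the minikubeCA-signed serving cert:

	curl -k https://192.168.49.2:8443/healthz
	ok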
	I1221 18:15:34.293857   55107 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:15:34.468203   55107 request.go:629] Waited for 174.282501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:15:34.473593   55107 system_pods.go:59] 7 kube-system pods found
	I1221 18:15:34.473625   55107 system_pods.go:61] "coredns-66bff467f8-w2452" [f8c1ee9a-4030-4080-a543-e3f2da79c4aa] Running
	I1221 18:15:34.473632   55107 system_pods.go:61] "etcd-ingress-addon-legacy-310121" [de8ae9dd-aa83-4645-a96c-3aa6eb94221d] Running
	I1221 18:15:34.473638   55107 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-310121" [e2485921-0bf3-4359-929b-e9eea9bbfd38] Running
	I1221 18:15:34.473646   55107 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-310121" [73043d47-fcf6-469f-b29d-6acaf4a085cf] Running
	I1221 18:15:34.473651   55107 system_pods.go:61] "kube-proxy-mhzzm" [ec1c30af-f267-46db-91f5-ec50e2bd16aa] Running
	I1221 18:15:34.473656   55107 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-310121" [bca28f15-2860-4b1b-8e6b-8f72677c9e9b] Running
	I1221 18:15:34.473662   55107 system_pods.go:61] "storage-provisioner" [4b7301aa-fc79-450f-bf7c-ea690c5805e4] Running
	I1221 18:15:34.473667   55107 system_pods.go:74] duration metric: took 179.804961ms to wait for pod list to return data ...
	I1221 18:15:34.473679   55107 default_sa.go:34] waiting for default service account to be created ...
	I1221 18:15:34.667983   55107 request.go:629] Waited for 194.20132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1221 18:15:34.670732   55107 default_sa.go:45] found service account: "default"
	I1221 18:15:34.670800   55107 default_sa.go:55] duration metric: took 197.110774ms for default service account to be created ...
	I1221 18:15:34.670838   55107 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 18:15:34.867909   55107 request.go:629] Waited for 196.997952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:15:34.874980   55107 system_pods.go:86] 7 kube-system pods found
	I1221 18:15:34.875058   55107 system_pods.go:89] "coredns-66bff467f8-w2452" [f8c1ee9a-4030-4080-a543-e3f2da79c4aa] Running
	I1221 18:15:34.875079   55107 system_pods.go:89] "etcd-ingress-addon-legacy-310121" [de8ae9dd-aa83-4645-a96c-3aa6eb94221d] Running
	I1221 18:15:34.875097   55107 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-310121" [e2485921-0bf3-4359-929b-e9eea9bbfd38] Running
	I1221 18:15:34.875136   55107 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-310121" [73043d47-fcf6-469f-b29d-6acaf4a085cf] Running
	I1221 18:15:34.875160   55107 system_pods.go:89] "kube-proxy-mhzzm" [ec1c30af-f267-46db-91f5-ec50e2bd16aa] Running
	I1221 18:15:34.875179   55107 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-310121" [bca28f15-2860-4b1b-8e6b-8f72677c9e9b] Running
	I1221 18:15:34.875211   55107 system_pods.go:89] "storage-provisioner" [4b7301aa-fc79-450f-bf7c-ea690c5805e4] Running
	I1221 18:15:34.875242   55107 system_pods.go:126] duration metric: took 204.382538ms to wait for k8s-apps to be running ...
	I1221 18:15:34.875263   55107 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 18:15:34.875390   55107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:15:34.898131   55107 system_svc.go:56] duration metric: took 22.859535ms WaitForService to wait for kubelet.
	I1221 18:15:34.898211   55107 kubeadm.go:581] duration metric: took 33.355252731s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1221 18:15:34.898236   55107 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:15:35.068490   55107 request.go:629] Waited for 170.184089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1221 18:15:35.071218   55107 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1221 18:15:35.071259   55107 node_conditions.go:123] node cpu capacity is 2
	I1221 18:15:35.071270   55107 node_conditions.go:105] duration metric: took 173.027917ms to run NodePressure ...
	I1221 18:15:35.071280   55107 start.go:228] waiting for startup goroutines ...
	I1221 18:15:35.071287   55107 start.go:233] waiting for cluster config update ...
	I1221 18:15:35.071303   55107 start.go:242] writing updated cluster config ...
	I1221 18:15:35.071614   55107 ssh_runner.go:195] Run: rm -f paused
	I1221 18:15:35.133727   55107 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1221 18:15:35.136373   55107 out.go:177] 
	W1221 18:15:35.138376   55107 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1221 18:15:35.140124   55107 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1221 18:15:35.142144   55107 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-310121" cluster and "default" namespace by default
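kubectl only guarantees compatibility within one minor version of the API server, hence the skew warning above (client 1.29 against cluster 1.18). The bundled, version-matched client that the hint refers to is invoked through the profile, e.g.:

	out/minikube-linux-arm64 -p ingress-addon-legacy-310121 kubectl -- get pods -A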
	
	
	==> Docker <==
	Dec 21 18:14:20 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:14:20.671698799Z" level=info msg="Daemon has completed initialization"
	Dec 21 18:14:20 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:14:20.698585979Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 21 18:14:20 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:14:20.698784136Z" level=info msg="API listen on [::]:2376"
	Dec 21 18:14:20 ingress-addon-legacy-310121 systemd[1]: Started Docker Application Container Engine.
	Dec 21 18:15:36 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:36.693293139Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Dec 21 18:15:38 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:38.153584909Z" level=info msg="ignoring event" container=2b97db5ececa3b1808faa9054830610bcb201e6bf64c2f07e214543483345623 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:38 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:38.174494924Z" level=info msg="ignoring event" container=7ad553235ca73b080ef4a943f9038b3bf558a8b6a08f434c709d928755db9f05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:38 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:38.359237550Z" level=info msg="ignoring event" container=640b46d032172ffcc26248185c8cbb60355ae2f3d37180f9917039091c457856 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:38 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:38.523288569Z" level=info msg="ignoring event" container=aa752420228548893390551dd93bcefe1c9325a8901f3d3871791c6e9785d26e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:39 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:39.387291678Z" level=info msg="ignoring event" container=4a971e979bbbde16ea748f3009aa28ef0f72670564d2cae6ad525ba294a8b0dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:40 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:40.217947740Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Dec 21 18:15:47 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:47.607391706Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 21 18:15:47 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:47.638138835Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 21 18:15:47 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:47.773495545Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Dec 21 18:15:53 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:53.716286652Z" level=info msg="ignoring event" container=2b98bc0add0d8de4bf821f6c1e9c59e84d6e97558c5dc03f95f9e7e4c5167a13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:15:54 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:15:54.590728583Z" level=info msg="ignoring event" container=78c7976291a78739224a8877956a65fa0e244f095370dc68c754a294b7530034 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:09 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:09.134759031Z" level=info msg="ignoring event" container=c14b55cde22fee8f16465e208aaf9fb7d0a1cf0f9e4b586a5dfd0828e35bb595 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:11 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:11.889400731Z" level=info msg="ignoring event" container=2256750e8c54acb733bc880257f1f9ed8d8d9f37b2848eac0f09899bc1f28040 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:12 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:12.938963764Z" level=info msg="ignoring event" container=11fed94a21d75a5aa7cb5277bc8781cab2a05e1a3c8cef98b8c60f5b9fc439e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:27 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:27.004622025Z" level=info msg="ignoring event" container=cb43e2895d6b6423f6c0ebbb126767e918a7dffb9f13d31f9e22cd5c5edb4f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:28 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:28.092368534Z" level=info msg="ignoring event" container=abf8bfa1f2db8cb9feecbe088a508eacf20a747e51dad86f51b734934f26250a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:31 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:31.866664653Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=bf94ef3feada89d484b41d2f9482e04c874bc23d9dad8f966326e04e5f8084c3
	Dec 21 18:16:31 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:31.881975966Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=bf94ef3feada89d484b41d2f9482e04c874bc23d9dad8f966326e04e5f8084c3
	Dec 21 18:16:31 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:31.951458022Z" level=info msg="ignoring event" container=bf94ef3feada89d484b41d2f9482e04c874bc23d9dad8f966326e04e5f8084c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 21 18:16:32 ingress-addon-legacy-310121 dockerd[1302]: time="2023-12-21T18:16:32.023108489Z" level=info msg="ignoring event" container=feb6f4d2f4ea57b9cabe4e9a3248faad9a72f1a675c2c6497eeca0a21e187179 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	abf8bfa1f2db8       dd1b12fcb6097                                                                                                      10 seconds ago       Exited              hello-world-app           2                   511a2df220643       hello-world-app-5f5d8b66bb-x9zv9
	66641e6888136       nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59                                      36 seconds ago       Running             nginx                     0                   09b207da27756       nginx
	bf94ef3feada8       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   52 seconds ago       Exited              controller                0                   feb6f4d2f4ea5       ingress-nginx-controller-7fcf777cb7-htv96
	aa75242022854       a883f7fc35610                                                                                                      59 seconds ago       Exited              patch                     1                   4a971e979bbbd       ingress-nginx-admission-patch-swsll
	7ad553235ca73       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   640b46d032172       ingress-nginx-admission-create-9wrh5
	e89ed4a37aa8d       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   a98ea441d7ea9       storage-provisioner
	20dac4ec66a6f       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   cc781d395c32c       coredns-66bff467f8-w2452
	6f03d2a511007       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   5a551becfd699       kube-proxy-mhzzm
	25553d384b8e4       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   ce4e8f3a40008       etcd-ingress-addon-legacy-310121
	deaf7eeed95c6       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   cf8c31b0f2908       kube-controller-manager-ingress-addon-legacy-310121
	e208e55d270ff       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   633c662c2ad85       kube-scheduler-ingress-addon-legacy-310121
	0b181d95b2e0d       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   ff53c4a5e4575       kube-apiserver-ingress-addon-legacy-310121
	
	
	==> coredns [20dac4ec66a6] <==
	[INFO] 172.17.0.1:2926 - 4058 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051209s
	[INFO] 172.17.0.1:53361 - 16418 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046811s
	[INFO] 172.17.0.1:2926 - 838 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055434s
	[INFO] 172.17.0.1:53361 - 52285 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040345s
	[INFO] 172.17.0.1:2926 - 22484 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005806s
	[INFO] 172.17.0.1:53361 - 31897 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041535s
	[INFO] 172.17.0.1:2926 - 55633 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039713s
	[INFO] 172.17.0.1:3240 - 24192 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060342s
	[INFO] 172.17.0.1:53361 - 1725 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036792s
	[INFO] 172.17.0.1:3240 - 32891 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062196s
	[INFO] 172.17.0.1:3240 - 26374 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063788s
	[INFO] 172.17.0.1:3240 - 31487 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056952s
	[INFO] 172.17.0.1:3240 - 45076 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052448s
	[INFO] 172.17.0.1:2926 - 49713 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007187s
	[INFO] 172.17.0.1:53361 - 29397 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076899s
	[INFO] 172.17.0.1:3240 - 60085 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001424819s
	[INFO] 172.17.0.1:53361 - 32622 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040173s
	[INFO] 172.17.0.1:3240 - 16215 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001275065s
	[INFO] 172.17.0.1:2926 - 58820 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002435625s
	[INFO] 172.17.0.1:53361 - 17298 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002012151s
	[INFO] 172.17.0.1:3240 - 49084 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000313244s
	[INFO] 172.17.0.1:2926 - 36455 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001001732s
	[INFO] 172.17.0.1:53361 - 64348 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001337408s
	[INFO] 172.17.0.1:2926 - 29488 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059259s
	[INFO] 172.17.0.1:53361 - 57767 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058758s
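
The NXDOMAIN cascade above is ordinary Kubernetes DNS search-path expansion rather than a lookup failure: with the default pod resolv.conf (ndots:5), a name with fewer than five dots is tried against each search suffix in turn, and only the final bare query for hello-world-app.default.svc.cluster.local returns NOERROR. Below is a minimal Go sketch of that expansion order, with the search list read off the NXDOMAIN entries above (the list is inferred from the log, not printed by coredns):

    package main

    import (
        "fmt"
        "strings"
    )

    // Search suffixes inferred from the NXDOMAIN lines in the coredns log.
    var search = []string{
        "ingress-nginx.svc.cluster.local",
        "svc.cluster.local",
        "cluster.local",
        "us-east-2.compute.internal",
    }

    const ndots = 5 // kubelet's default for pod resolv.conf

    func main() {
        name := "hello-world-app.default.svc.cluster.local"
        if strings.Count(name, ".") >= ndots {
            fmt.Println("try absolute first:", name+".")
        }
        for _, s := range search {
            fmt.Println("try:", name+"."+s) // each of these is an NXDOMAIN above
        }
        fmt.Println("try bare:", name) // NOERROR in the log
    }

Running it reproduces the query sequence that coredns logged, ending with the one name that actually resolves.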
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-310121
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-310121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=ingress-addon-legacy-310121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_21T18_14_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:14:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-310121
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:16:19 +0000   Thu, 21 Dec 2023 18:14:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:16:19 +0000   Thu, 21 Dec 2023 18:14:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:16:19 +0000   Thu, 21 Dec 2023 18:14:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:16:19 +0000   Thu, 21 Dec 2023 18:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-310121
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074e23b21643f980f9fe4871387107
	  System UUID:                49305ba1-c86a-42b0-99af-abf911917aff
	  Boot ID:                    d56f90bc-750b-4b43-9ef5-5f30682d0582
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-x9zv9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-w2452                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     97s
	  kube-system                 etcd-ingress-addon-legacy-310121                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-apiserver-ingress-addon-legacy-310121             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-310121    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-mhzzm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-ingress-addon-legacy-310121             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  2m3s (x5 over 2m3s)  kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x5 over 2m3s)  kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x4 over 2m3s)  kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s                 kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s                 kubelet     Node ingress-addon-legacy-310121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                98s                  kubelet     Node ingress-addon-legacy-310121 status is now: NodeReady
	  Normal  Starting                 95s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000846] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=00000000c829643b{9p.inode} n=0000000048d6544c
	[  +0.001091] FS-Cache: N-key=[8] '876ced0000000000'
	[  +0.007682] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000c829643b{9p.inode} n=000000009dc50240
	[  +0.001070] FS-Cache: O-key=[8] '876ced0000000000'
	[  +0.000752] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=00000000c829643b{9p.inode} n=0000000001d80add
	[  +0.001068] FS-Cache: N-key=[8] '876ced0000000000'
	[  +2.275523] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000c829643b{9p.inode} n=000000003ef2519c
	[  +0.001308] FS-Cache: O-key=[8] '866ced0000000000'
	[  +0.000739] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000c829643b{9p.inode} n=0000000048d6544c
	[  +0.001086] FS-Cache: N-key=[8] '866ced0000000000'
	[  +0.439080] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=00000000c829643b{9p.inode} n=00000000f31ba02f
	[  +0.001090] FS-Cache: O-key=[8] '8f6ced0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000c829643b{9p.inode} n=000000005099a8f0
	[  +0.001113] FS-Cache: N-key=[8] '8f6ced0000000000'
	[Dec21 18:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [25553d384b8e] <==
	raft2023/12/21 18:14:36 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/21 18:14:36 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/21 18:14:36 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/21 18:14:36 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-21 18:14:36.600793 W | auth: simple token is not cryptographically signed
	2023-12-21 18:14:36.610535 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-21 18:14:36.845576 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-21 18:14:36.845974 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-21 18:14:36.846469 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-21 18:14:36.846947 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/21 18:14:36 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-21 18:14:36.847527 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/21 18:14:37 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/21 18:14:37 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/21 18:14:37 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/21 18:14:37 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/21 18:14:37 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-21 18:14:37.996389 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-21 18:14:37.996979 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-21 18:14:37.997147 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-21 18:14:37.997284 I | etcdserver: published {Name:ingress-addon-legacy-310121 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-21 18:14:37.997492 I | embed: ready to serve client requests
	2023-12-21 18:14:37.998994 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-21 18:14:37.999205 I | embed: ready to serve client requests
	2023-12-21 18:14:38.000473 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 18:16:37 up 59 min,  0 users,  load average: 1.71, 2.15, 1.42
	Linux ingress-addon-legacy-310121 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [0b181d95b2e0] <==
	I1221 18:14:41.943615       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1221 18:14:41.947227       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1221 18:14:42.020256       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1221 18:14:42.026414       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1221 18:14:42.028366       1 cache.go:39] Caches are synced for autoregister controller
	I1221 18:14:42.028747       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 18:14:42.045253       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1221 18:14:42.817323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1221 18:14:42.817358       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1221 18:14:42.834994       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1221 18:14:42.845265       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1221 18:14:42.845467       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1221 18:14:43.259812       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 18:14:43.298752       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1221 18:14:43.444151       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1221 18:14:43.445392       1 controller.go:609] quota admission added evaluator for: endpoints
	I1221 18:14:43.449485       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 18:14:44.258298       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1221 18:14:45.090131       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1221 18:14:45.193421       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1221 18:14:48.847374       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 18:15:00.742473       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1221 18:15:00.853204       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1221 18:15:35.970964       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1221 18:15:59.361639       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [deaf7eeed95c] <==
	I1221 18:15:00.903036       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2bdea502-90b7-410c-abba-5433b3261e6a", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mhzzm
	I1221 18:15:00.927279       1 shared_informer.go:230] Caches are synced for endpoint 
	E1221 18:15:00.992602       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"2bdea502-90b7-410c-abba-5433b3261e6a", ResourceVersion:"201", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63838779285, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c9e740), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000c9e760)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c9e780), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001124d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000c9e7a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c9e7c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c9e800)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001065a90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40006be078), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003b7ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40007c9088)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40006be0c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1221 18:15:01.029482       1 shared_informer.go:230] Caches are synced for expand 
	I1221 18:15:01.029579       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1221 18:15:01.029755       1 shared_informer.go:230] Caches are synced for PV protection 
	I1221 18:15:01.095270       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1221 18:15:01.124261       1 shared_informer.go:230] Caches are synced for attach detach 
	I1221 18:15:01.145888       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"34cfcce7-d867-403a-8251-972c9af82e04", APIVersion:"apps/v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1221 18:15:01.174628       1 shared_informer.go:230] Caches are synced for disruption 
	I1221 18:15:01.174653       1 disruption.go:339] Sending events to api server.
	I1221 18:15:01.201658       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d34888c9-24eb-4786-89b5-1d82c08a9c01", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-fk8jh
	I1221 18:15:01.239204       1 shared_informer.go:230] Caches are synced for resource quota 
	I1221 18:15:01.271018       1 shared_informer.go:230] Caches are synced for resource quota 
	I1221 18:15:01.272600       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1221 18:15:01.272620       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1221 18:15:01.328339       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1221 18:15:35.954053       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"92afa5db-9e24-4107-884f-e4291ed911c3", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1221 18:15:35.974238       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"931e81fa-2424-45ca-8e74-f6032bae3578", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-htv96
	I1221 18:15:36.018088       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"728909cf-92a0-48a8-b7bd-86890576fbda", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-9wrh5
	I1221 18:15:36.079359       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"24ed8d5d-da90-4737-bd37-949c4b305110", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-swsll
	I1221 18:15:38.325594       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"728909cf-92a0-48a8-b7bd-86890576fbda", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1221 18:15:39.360861       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"24ed8d5d-da90-4737-bd37-949c4b305110", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1221 18:16:09.180529       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f7f07f3d-f5c5-4725-8f7a-3960415beb80", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1221 18:16:09.197984       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"61a720d6-eccd-44ea-808f-9866b5a6329f", APIVersion:"apps/v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-x9zv9
	
	
	==> kube-proxy [6f03d2a51100] <==
	W1221 18:15:02.007808       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1221 18:15:02.043767       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1221 18:15:02.043806       1 server_others.go:186] Using iptables Proxier.
	I1221 18:15:02.044150       1 server.go:583] Version: v1.18.20
	I1221 18:15:02.045254       1 config.go:315] Starting service config controller
	I1221 18:15:02.045296       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1221 18:15:02.045391       1 config.go:133] Starting endpoints config controller
	I1221 18:15:02.045395       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1221 18:15:02.159947       1 shared_informer.go:230] Caches are synced for service config 
	I1221 18:15:02.160082       1 shared_informer.go:230] Caches are synced for endpoints config 
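
The first kube-proxy line is a benign fallback: with no mode set in its configuration, kube-proxy v1.18 logs the warning and proceeds with the iptables proxier, which is what the following lines confirm. A rough Go sketch of that decision, as a paraphrase of the behavior visible in the log rather than of kube-proxy source:

    package main

    import "log"

    // chooseProxyMode mirrors the fallback visible in the log above:
    // an empty or unrecognized mode degrades to "iptables".
    func chooseProxyMode(configured string) string {
        switch configured {
        case "iptables", "ipvs", "userspace":
            return configured
        default:
            log.Printf("Unknown proxy mode %q, assuming iptables proxy", configured)
            return "iptables"
        }
    }

    func main() {
        log.Println("using:", chooseProxyMode("")) // "" is what the log shows
    }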
	
	
	==> kube-scheduler [e208e55d270f] <==
	W1221 18:14:41.990983       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 18:14:42.013478       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1221 18:14:42.013674       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1221 18:14:42.016133       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1221 18:14:42.016502       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 18:14:42.016630       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 18:14:42.016737       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1221 18:14:42.022798       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1221 18:14:42.026201       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1221 18:14:42.029283       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1221 18:14:42.029598       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 18:14:42.030380       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:14:42.032358       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1221 18:14:42.032807       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:14:42.033657       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:14:42.033916       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1221 18:14:42.034244       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1221 18:14:42.034434       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1221 18:14:42.034533       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:14:43.049579       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:14:43.070373       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:14:43.090536       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1221 18:14:43.100255       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1221 18:14:45.816835       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1221 18:15:00.883419       1 factory.go:503] pod: kube-system/coredns-66bff467f8-fk8jh is already present in unschedulable queue
	
	
	==> kubelet <==
	Dec 21 18:16:14 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:14.825899    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 11fed94a21d75a5aa7cb5277bc8781cab2a05e1a3c8cef98b8c60f5b9fc439e5
	Dec 21 18:16:14 ingress-addon-legacy-310121 kubelet[2878]: E1221 18:16:14.826130    2878 pod_workers.go:191] Error syncing pod 66325b60-c4f9-4e3a-874c-2d1b332bc88f ("hello-world-app-5f5d8b66bb-x9zv9_default(66325b60-c4f9-4e3a-874c-2d1b332bc88f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x9zv9_default(66325b60-c4f9-4e3a-874c-2d1b332bc88f)"
	Dec 21 18:16:21 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:21.976420    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c14b55cde22fee8f16465e208aaf9fb7d0a1cf0f9e4b586a5dfd0828e35bb595
	Dec 21 18:16:21 ingress-addon-legacy-310121 kubelet[2878]: E1221 18:16:21.977210    2878 pod_workers.go:191] Error syncing pod 70e1f719-1dbf-4de7-ae5b-8bc2853da370 ("kube-ingress-dns-minikube_kube-system(70e1f719-1dbf-4de7-ae5b-8bc2853da370)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(70e1f719-1dbf-4de7-ae5b-8bc2853da370)"
	Dec 21 18:16:25 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:25.128974    2878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-75hts" (UniqueName: "kubernetes.io/secret/70e1f719-1dbf-4de7-ae5b-8bc2853da370-minikube-ingress-dns-token-75hts") pod "70e1f719-1dbf-4de7-ae5b-8bc2853da370" (UID: "70e1f719-1dbf-4de7-ae5b-8bc2853da370")
	Dec 21 18:16:25 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:25.135571    2878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e1f719-1dbf-4de7-ae5b-8bc2853da370-minikube-ingress-dns-token-75hts" (OuterVolumeSpecName: "minikube-ingress-dns-token-75hts") pod "70e1f719-1dbf-4de7-ae5b-8bc2853da370" (UID: "70e1f719-1dbf-4de7-ae5b-8bc2853da370"). InnerVolumeSpecName "minikube-ingress-dns-token-75hts". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:16:25 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:25.229359    2878 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-75hts" (UniqueName: "kubernetes.io/secret/70e1f719-1dbf-4de7-ae5b-8bc2853da370-minikube-ingress-dns-token-75hts") on node "ingress-addon-legacy-310121" DevicePath ""
	Dec 21 18:16:27 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:27.917965    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c14b55cde22fee8f16465e208aaf9fb7d0a1cf0f9e4b586a5dfd0828e35bb595
	Dec 21 18:16:27 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:27.976437    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 11fed94a21d75a5aa7cb5277bc8781cab2a05e1a3c8cef98b8c60f5b9fc439e5
	Dec 21 18:16:28 ingress-addon-legacy-310121 kubelet[2878]: W1221 18:16:28.119527    2878 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod66325b60-c4f9-4e3a-874c-2d1b332bc88f/abf8bfa1f2db8cb9feecbe088a508eacf20a747e51dad86f51b734934f26250a": none of the resources are being tracked.
	Dec 21 18:16:28 ingress-addon-legacy-310121 kubelet[2878]: W1221 18:16:28.927159    2878 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-x9zv9 through plugin: invalid network status for
	Dec 21 18:16:28 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:28.933669    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 11fed94a21d75a5aa7cb5277bc8781cab2a05e1a3c8cef98b8c60f5b9fc439e5
	Dec 21 18:16:28 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:28.933993    2878 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: abf8bfa1f2db8cb9feecbe088a508eacf20a747e51dad86f51b734934f26250a
	Dec 21 18:16:28 ingress-addon-legacy-310121 kubelet[2878]: E1221 18:16:28.934251    2878 pod_workers.go:191] Error syncing pod 66325b60-c4f9-4e3a-874c-2d1b332bc88f ("hello-world-app-5f5d8b66bb-x9zv9_default(66325b60-c4f9-4e3a-874c-2d1b332bc88f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x9zv9_default(66325b60-c4f9-4e3a-874c-2d1b332bc88f)"
	Dec 21 18:16:29 ingress-addon-legacy-310121 kubelet[2878]: E1221 18:16:29.849544    2878 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-htv96.17a2eb8aa777892b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-htv96", UID:"630a0219-05ea-43c0-b2ac-8d390b5f7cc6", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-310121"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1593e1f727de72b, ext:104818998645, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1593e1f727de72b, ext:104818998645, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-htv96.17a2eb8aa777892b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 21 18:16:29 ingress-addon-legacy-310121 kubelet[2878]: E1221 18:16:29.882103    2878 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-htv96.17a2eb8aa777892b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-htv96", UID:"630a0219-05ea-43c0-b2ac-8d390b5f7cc6", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-310121"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1593e1f727de72b, ext:104818998645, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1593e1f739a9a36, ext:104837656704, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-htv96.17a2eb8aa777892b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 21 18:16:29 ingress-addon-legacy-310121 kubelet[2878]: W1221 18:16:29.960002    2878 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-x9zv9 through plugin: invalid network status for
	Dec 21 18:16:32 ingress-addon-legacy-310121 kubelet[2878]: W1221 18:16:32.990577    2878 pod_container_deletor.go:77] Container "feb6f4d2f4ea57b9cabe4e9a3248faad9a72f1a675c2c6497eeca0a21e187179" not found in pod's containers
	Dec 21 18:16:33 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:33.950794    2878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7v4gv" (UniqueName: "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-ingress-nginx-token-7v4gv") pod "630a0219-05ea-43c0-b2ac-8d390b5f7cc6" (UID: "630a0219-05ea-43c0-b2ac-8d390b5f7cc6")
	Dec 21 18:16:33 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:33.950855    2878 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-webhook-cert") pod "630a0219-05ea-43c0-b2ac-8d390b5f7cc6" (UID: "630a0219-05ea-43c0-b2ac-8d390b5f7cc6")
	Dec 21 18:16:33 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:33.956895    2878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-ingress-nginx-token-7v4gv" (OuterVolumeSpecName: "ingress-nginx-token-7v4gv") pod "630a0219-05ea-43c0-b2ac-8d390b5f7cc6" (UID: "630a0219-05ea-43c0-b2ac-8d390b5f7cc6"). InnerVolumeSpecName "ingress-nginx-token-7v4gv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:16:33 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:33.958816    2878 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "630a0219-05ea-43c0-b2ac-8d390b5f7cc6" (UID: "630a0219-05ea-43c0-b2ac-8d390b5f7cc6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:16:34 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:34.051109    2878 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-webhook-cert") on node "ingress-addon-legacy-310121" DevicePath ""
	Dec 21 18:16:34 ingress-addon-legacy-310121 kubelet[2878]: I1221 18:16:34.051193    2878 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7v4gv" (UniqueName: "kubernetes.io/secret/630a0219-05ea-43c0-b2ac-8d390b5f7cc6-ingress-nginx-token-7v4gv") on node "ingress-addon-legacy-310121" DevicePath ""
	Dec 21 18:16:34 ingress-addon-legacy-310121 kubelet[2878]: W1221 18:16:34.989695    2878 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/630a0219-05ea-43c0-b2ac-8d390b5f7cc6/volumes" does not exist
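
The two pod_workers errors above show the kubelet's crash-loop schedule at work: the restart delay for hello-world-app starts at 10s, doubles to 20s on the next failure, and keeps doubling up to a 5-minute cap until a sustained clean run resets it. A toy Go rendering of that schedule (the constants mirror kubelet defaults; this is an illustration, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // kubelet's initial crash-loop back-off
        const maxDelay = 5 * time.Minute // cap; the delay stays flat from here on
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

This prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s repeating, matching the "back-off 10s" and "back-off 20s" messages above.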
	
	
	==> storage-provisioner [e89ed4a37aa8] <==
	I1221 18:15:04.370228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 18:15:04.389346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 18:15:04.389522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 18:15:04.396553       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 18:15:04.397585       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-310121_5a2a0909-b9f1-4216-830b-cb8df4a5237c!
	I1221 18:15:04.400209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d153ad8-cde1-49b7-99a2-016e372b2a8b", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-310121_5a2a0909-b9f1-4216-830b-cb8df4a5237c became leader
	I1221 18:15:04.498767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-310121_5a2a0909-b9f1-4216-830b-cb8df4a5237c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-310121 -n ingress-addon-legacy-310121
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-310121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (51.18s)
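The kubelet log above shows the race behind this failure: event creation was rejected with "unable to create new content in namespace ingress-nginx because it is being terminated". A minimal diagnostic sketch for inspecting a namespace caught mid-termination, assuming the context name from the logs above and that the namespace still exists when run (these commands are not part of the test itself):

	# Phase reads "Terminating" while deletion is in flight
	kubectl --context ingress-addon-legacy-310121 get namespace ingress-nginx -o jsonpath='{.status.phase}'
	# Finalizers that have not been cleared will hold the namespace open
	kubectl --context ingress-addon-legacy-310121 get namespace ingress-nginx -o jsonpath='{.spec.finalizers}'
	# Recent events, newest last, to see what was still being written during teardown
	kubectl --context ingress-addon-legacy-310121 get events -n ingress-nginx --sort-by=.lastTimestamp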

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (414.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1221 18:42:03.469015    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:42:06.676004    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (16.746069673s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-203518] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1460215855
	* Using the docker driver based on user configuration
	* Starting control plane node stopped-upgrade-203518 in cluster stopped-upgrade-203518
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ... (terminal spinner frames omitted)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 514.92 MiB / 514.92 MiB  100.00% (incremental download-progress frames omitted)
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
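Every provisioning attempt in this test dies on the same GUEST_PROVISION error: the legacy v1.17.0 binary asks Docker for an IP that is already allocated. A hedged sketch for locating the conflicting allocation, assuming (the log does not confirm this) that a per-profile Docker network named after the profile is involved:

	# List networks; a leftover per-profile network is the usual suspect
	docker network ls
	# Show the subnet and gateway the profile network claims (the network name is an assumption)
	docker network inspect stopped-upgrade-203518 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# Map every container, running or not, to the addresses it holds
	docker ps -aq | xargs -r -n1 docker inspect --format '{{.Name}}: {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'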
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1221 18:42:27.156825    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:43:08.117853    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m12.426702269s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-203518] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1829719454
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-203518 in cluster stopped-upgrade-203518
	* docker "stopped-upgrade-203518" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (terminal spinner frames omitted)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m22.94610722s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-203518] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig3088338585
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-203518 in cluster stopped-upgrade-203518
	* Downloading Kubernetes v1.20.2 preload ...
	* docker "stopped-upgrade-203518" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (terminal spinner frames omitted)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 514.92 MiB / 514.92 MiB  100.00% (incremental download-progress frames omitted)
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.17.0 start failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (414.26s)
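Since all three starts failed identically, a retry is only useful once the stale address is released. A cleanup sketch under the assumption that the conflict lives in leftover minikube Docker state (profile and binary names taken from the log above; the prune step removes all unused networks, not just minikube's):

	# Drop the half-created profile and its container/volume state
	out/minikube-linux-arm64 delete -p stopped-upgrade-203518
	# Remove the per-profile network if one survived (the network name is an assumption)
	docker network rm stopped-upgrade-203518 2>/dev/null || true
	# Or sweep every unused network
	docker network prune -f
	# Then rerun the legacy start exactly as the test does
	/tmp/minikube-v1.17.0.594955100.exe start -p stopped-upgrade-203518 --memory=2200 --vm-driver=docker --container-runtime=docker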

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-203518
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p stopped-upgrade-203518: exit status 85 (144.595312ms)
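Before the captured output below: this non-zero exit follows directly from the Upgrade failure above, since the stopped-upgrade-203518 host was never provisioned, so there is no guest from which to collect logs (an inference from this report, not from minikube's source). A minimal sketch for confirming the profile's state before asking for logs:

	# Reports host/kubelet/apiserver state for the profile; a never-created host shows as such
	out/minikube-linux-arm64 status -p stopped-upgrade-203518
	# Lists every profile this CI run left behind
	out/minikube-linux-arm64 profile list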

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-555316                         | kubernetes-upgrade-555316 | jenkins | v1.32.0 | 21 Dec 23 18:40 UTC | 21 Dec 23 18:40 UTC |
	| start   | -p kubernetes-upgrade-555316                         | kubernetes-upgrade-555316 | jenkins | v1.32.0 | 21 Dec 23 18:40 UTC | 21 Dec 23 18:41 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-464187                            | missing-upgrade-464187    | jenkins | v1.32.0 | 21 Dec 23 18:41 UTC | 21 Dec 23 18:42 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-555316                         | kubernetes-upgrade-555316 | jenkins | v1.32.0 | 21 Dec 23 18:41 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-555316                         | kubernetes-upgrade-555316 | jenkins | v1.32.0 | 21 Dec 23 18:41 UTC | 21 Dec 23 18:41 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-555316                         | kubernetes-upgrade-555316 | jenkins | v1.32.0 | 21 Dec 23 18:41 UTC | 21 Dec 23 18:42 UTC |
	| delete  | -p missing-upgrade-464187                            | missing-upgrade-464187    | jenkins | v1.32.0 | 21 Dec 23 18:42 UTC | 21 Dec 23 18:42 UTC |
	| start   | -p running-upgrade-569424                            | running-upgrade-569424    | jenkins | v1.32.0 | 21 Dec 23 18:43 UTC | 21 Dec 23 18:43 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-569424                            | running-upgrade-569424    | jenkins | v1.32.0 | 21 Dec 23 18:43 UTC | 21 Dec 23 18:43 UTC |
	| start   | -p force-systemd-flag-143445                         | force-systemd-flag-143445 | jenkins | v1.32.0 | 21 Dec 23 18:43 UTC | 21 Dec 23 18:44 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-143445                            | force-systemd-flag-143445 | jenkins | v1.32.0 | 21 Dec 23 18:44 UTC | 21 Dec 23 18:44 UTC |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-143445                         | force-systemd-flag-143445 | jenkins | v1.32.0 | 21 Dec 23 18:44 UTC | 21 Dec 23 18:44 UTC |
	| start   | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:44 UTC |                     |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:44 UTC | 21 Dec 23 18:44 UTC |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:44 UTC | 21 Dec 23 18:45 UTC |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	| start   | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-039433 sudo                          | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	| start   | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-039433 sudo                          | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-039433                               | NoKubernetes-039433       | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo crictl                         | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo crictl                         | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo find                           | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo ip a s                         | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	| ssh     | -p cilium-129117 sudo ip r s                         | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo iptables                       | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo docker                         | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo cat                            | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo                                | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo find                           | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-129117 sudo crio                           | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-129117                                     | cilium-129117             | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:45 UTC |
	| start   | -p force-systemd-env-145625                          | force-systemd-env-145625  | jenkins | v1.32.0 | 21 Dec 23 18:45 UTC | 21 Dec 23 18:46 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-145625                             | force-systemd-env-145625  | jenkins | v1.32.0 | 21 Dec 23 18:46 UTC | 21 Dec 23 18:46 UTC |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-145625                          | force-systemd-env-145625  | jenkins | v1.32.0 | 21 Dec 23 18:46 UTC | 21 Dec 23 18:46 UTC |
	| start   | -p cert-expiration-123629                            | cert-expiration-123629    | jenkins | v1.32.0 | 21 Dec 23 18:46 UTC | 21 Dec 23 18:46 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:46:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:46:21.190934  225945 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:46:21.191075  225945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:46:21.191078  225945 out.go:309] Setting ErrFile to fd 2...
	I1221 18:46:21.191083  225945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:46:21.191327  225945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:46:21.191731  225945 out.go:303] Setting JSON to false
	I1221 18:46:21.192540  225945 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5329,"bootTime":1703179053,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:46:21.192601  225945 start.go:138] virtualization:  
	I1221 18:46:21.195353  225945 out.go:177] * [cert-expiration-123629] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:46:21.197408  225945 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:46:21.199081  225945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:46:21.197511  225945 notify.go:220] Checking for updates...
	I1221 18:46:21.203169  225945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:46:21.204931  225945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:46:21.206974  225945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1221 18:46:21.208501  225945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:46:21.210828  225945 config.go:182] Loaded profile config "stopped-upgrade-203518": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.2
	I1221 18:46:21.210925  225945 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:46:21.234453  225945 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:46:21.234567  225945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:46:21.316252  225945 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-21 18:46:21.306445901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:46:21.316354  225945 docker.go:295] overlay module found
	I1221 18:46:21.318236  225945 out.go:177] * Using the docker driver based on user configuration
	I1221 18:46:21.320215  225945 start.go:298] selected driver: docker
	I1221 18:46:21.320222  225945 start.go:902] validating driver "docker" against <nil>
	I1221 18:46:21.320241  225945 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:46:21.320834  225945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:46:21.391913  225945 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-21 18:46:21.382326066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:46:21.392048  225945 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:46:21.392290  225945 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 18:46:21.394387  225945 out.go:177] * Using Docker driver with root privileges
	I1221 18:46:21.396246  225945 cni.go:84] Creating CNI manager for ""
	I1221 18:46:21.396263  225945 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:46:21.396274  225945 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1221 18:46:21.396283  225945 start_flags.go:323] config:
	{Name:cert-expiration-123629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-123629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:46:21.399419  225945 out.go:177] * Starting control plane node cert-expiration-123629 in cluster cert-expiration-123629
	I1221 18:46:21.401237  225945 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:46:21.402950  225945 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:46:21.404624  225945 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:46:21.404658  225945 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1221 18:46:21.404664  225945 cache.go:56] Caching tarball of preloaded images
	I1221 18:46:21.404673  225945 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:46:21.404746  225945 preload.go:174] Found /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1221 18:46:21.404754  225945 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1221 18:46:21.404855  225945 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/config.json ...
	I1221 18:46:21.404869  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/config.json: {Name:mk6437d665cea7f45d34473b82af39209fa09308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:21.423039  225945 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:46:21.423052  225945 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1221 18:46:21.423068  225945 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:46:21.423107  225945 start.go:365] acquiring machines lock for cert-expiration-123629: {Name:mkfc3c9ce6f13e2baa778cba376a0cbe80a685e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:46:21.423682  225945 start.go:369] acquired machines lock for "cert-expiration-123629" in 557.921µs
	I1221 18:46:21.423709  225945 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-123629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-123629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
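
The machines lock above is acquired with a 500ms retry delay and a 10m timeout (the Delay/Timeout fields in the log line). As a rough illustration only, and not minikube's actual lock implementation, a file-based lock with the same polling behavior can be sketched in Go:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire creates path with O_EXCL, so exactly one process can hold the
// lock; contenders poll every delay until the timeout expires.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("machines lock held")
}

O_EXCL makes creation atomic, so whichever process creates the file first holds the lock; everyone else polls until the deadline.
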
	I1221 18:46:21.423782  225945 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:46:21.426164  225945 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1221 18:46:21.426397  225945 start.go:159] libmachine.API.Create for "cert-expiration-123629" (driver="docker")
	I1221 18:46:21.426420  225945 client.go:168] LocalClient.Create starting
	I1221 18:46:21.426492  225945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem
	I1221 18:46:21.426522  225945 main.go:141] libmachine: Decoding PEM data...
	I1221 18:46:21.426535  225945 main.go:141] libmachine: Parsing certificate...
	I1221 18:46:21.426582  225945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem
	I1221 18:46:21.426597  225945 main.go:141] libmachine: Decoding PEM data...
	I1221 18:46:21.426607  225945 main.go:141] libmachine: Parsing certificate...
	I1221 18:46:21.426938  225945 cli_runner.go:164] Run: docker network inspect cert-expiration-123629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:46:21.444795  225945 cli_runner.go:211] docker network inspect cert-expiration-123629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:46:21.444857  225945 network_create.go:281] running [docker network inspect cert-expiration-123629] to gather additional debugging logs...
	I1221 18:46:21.444871  225945 cli_runner.go:164] Run: docker network inspect cert-expiration-123629
	W1221 18:46:21.461969  225945 cli_runner.go:211] docker network inspect cert-expiration-123629 returned with exit code 1
	I1221 18:46:21.462001  225945 network_create.go:284] error running [docker network inspect cert-expiration-123629]: docker network inspect cert-expiration-123629: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-123629 not found
	I1221 18:46:21.462011  225945 network_create.go:286] output of [docker network inspect cert-expiration-123629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-123629 not found
	
	** /stderr **
	I1221 18:46:21.462103  225945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:46:21.481218  225945 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1896bac3aebd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:dd:aa:0f} reservation:<nil>}
	I1221 18:46:21.481539  225945 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf9b5ee44688 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:98:73:f8:4f} reservation:<nil>}
	I1221 18:46:21.481975  225945 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025d6a50}
	I1221 18:46:21.481993  225945 network_create.go:124] attempt to create docker network cert-expiration-123629 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1221 18:46:21.482055  225945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-123629 cert-expiration-123629
	I1221 18:46:21.551800  225945 network_create.go:108] docker network cert-expiration-123629 192.168.67.0/24 created
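
The network.go lines above show the subnet picker skipping 192.168.49.0/24 and 192.168.58.0/24 (already backed by bridges) and settling on 192.168.67.0/24. A minimal sketch of that selection, assuming the step-by-9 sequence implied by 49, 58, 67 in the log (the helper below is hypothetical, not minikube's code):

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 ranges in steps of 9,
// skipping any CIDR an existing bridge already occupies.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 { // 49, 58, 67, ... as in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-1896bac3aebd
		"192.168.58.0/24": true, // br-bf9b5ee44688
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.67.0/24
}
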
	I1221 18:46:21.551822  225945 kic.go:121] calculated static IP "192.168.67.2" for the "cert-expiration-123629" container
	I1221 18:46:21.551887  225945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:46:21.569076  225945 cli_runner.go:164] Run: docker volume create cert-expiration-123629 --label name.minikube.sigs.k8s.io=cert-expiration-123629 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:46:21.586800  225945 oci.go:103] Successfully created a docker volume cert-expiration-123629
	I1221 18:46:21.586878  225945 cli_runner.go:164] Run: docker run --rm --name cert-expiration-123629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-123629 --entrypoint /usr/bin/test -v cert-expiration-123629:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:46:22.130416  225945 oci.go:107] Successfully prepared a docker volume cert-expiration-123629
	I1221 18:46:22.130464  225945 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:46:22.130481  225945 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:46:22.130554  225945 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-123629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:46:26.193709  225945 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-123629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.063105697s)
	I1221 18:46:26.195925  225945 kic.go:203] duration metric: took 4.065435 seconds to extract preloaded images to volume
	W1221 18:46:26.196067  225945 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:46:26.196166  225945 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:46:26.259313  225945 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-123629 --name cert-expiration-123629 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-123629 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-123629 --network cert-expiration-123629 --ip 192.168.67.2 --volume cert-expiration-123629:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:46:26.580775  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Running}}
	I1221 18:46:26.615205  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:26.645881  225945 cli_runner.go:164] Run: docker exec cert-expiration-123629 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:46:26.708504  225945 oci.go:144] the created container "cert-expiration-123629" has a running status.
	I1221 18:46:26.708521  225945 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa...
	I1221 18:46:27.089050  225945 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:46:27.126407  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:27.162567  225945 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:46:27.162578  225945 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-123629 chown docker:docker /home/docker/.ssh/authorized_keys]
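
kic.go:225 generates a fresh RSA key for the container and installs the public half as /home/docker/.ssh/authorized_keys (381 bytes above). A self-contained sketch of that keypair step, with hypothetical output paths:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key -> id_rsa (PKCS#1 PEM), readable only by the owner.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
		panic(err)
	}
	// Public key -> id_rsa.pub in authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
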
	I1221 18:46:27.238490  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:27.259346  225945 machine.go:88] provisioning docker machine ...
	I1221 18:46:27.259378  225945 ubuntu.go:169] provisioning hostname "cert-expiration-123629"
	I1221 18:46:27.259437  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:27.278638  225945 main.go:141] libmachine: Using SSH client type: native
	I1221 18:46:27.279055  225945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I1221 18:46:27.279066  225945 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-123629 && echo "cert-expiration-123629" | sudo tee /etc/hostname
	I1221 18:46:27.513451  225945 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-123629
	
	I1221 18:46:27.513539  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:27.540622  225945 main.go:141] libmachine: Using SSH client type: native
	I1221 18:46:27.541009  225945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I1221 18:46:27.541025  225945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-123629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-123629/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-123629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:46:27.692838  225945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:46:27.692854  225945 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-2360/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-2360/.minikube}
	I1221 18:46:27.692878  225945 ubuntu.go:177] setting up certificates
	I1221 18:46:27.692885  225945 provision.go:83] configureAuth start
	I1221 18:46:27.692949  225945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-123629
	I1221 18:46:27.711660  225945 provision.go:138] copyHostCerts
	I1221 18:46:27.711708  225945 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem, removing ...
	I1221 18:46:27.711715  225945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem
	I1221 18:46:27.711774  225945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/ca.pem (1082 bytes)
	I1221 18:46:27.711851  225945 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem, removing ...
	I1221 18:46:27.711854  225945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem
	I1221 18:46:27.711881  225945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/cert.pem (1123 bytes)
	I1221 18:46:27.711928  225945 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem, removing ...
	I1221 18:46:27.711931  225945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem
	I1221 18:46:27.711953  225945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-2360/.minikube/key.pem (1675 bytes)
	I1221 18:46:27.711991  225945 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-123629 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-123629]
	I1221 18:46:28.454867  225945 provision.go:172] copyRemoteCerts
	I1221 18:46:28.454918  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:46:28.454964  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:28.472653  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:28.578673  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1221 18:46:28.607772  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1221 18:46:28.635109  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
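
provision.go:112 issues a server certificate whose SAN list covers the node IP, loopback, and the hostnames seen above, then copies it to /etc/docker. A simplified sketch (self-signed here for brevity; the real step signs with ca.pem and ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-123629"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute), // CertExpiration:3m0s in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "cert-expiration-123629"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
		panic(err)
	}
}
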
	I1221 18:46:28.663435  225945 provision.go:86] duration metric: configureAuth took 970.518214ms
	I1221 18:46:28.663452  225945 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:46:28.663647  225945 config.go:182] Loaded profile config "cert-expiration-123629": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:46:28.663708  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:28.681911  225945 main.go:141] libmachine: Using SSH client type: native
	I1221 18:46:28.682314  225945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I1221 18:46:28.682323  225945 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1221 18:46:28.833000  225945 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1221 18:46:28.833011  225945 ubuntu.go:71] root file system type: overlay
	I1221 18:46:28.833140  225945 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1221 18:46:28.833203  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:28.852025  225945 main.go:141] libmachine: Using SSH client type: native
	I1221 18:46:28.852441  225945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I1221 18:46:28.852514  225945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1221 18:46:29.013757  225945 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1221 18:46:29.013840  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:29.032447  225945 main.go:141] libmachine: Using SSH client type: native
	I1221 18:46:29.032845  225945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I1221 18:46:29.032861  225945 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1221 18:46:29.861483  225945 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-21 18:46:29.007763669 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1221 18:46:29.861505  225945 machine.go:91] provisioned docker machine in 2.602147724s
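
The long SSH exchange above is an idempotent unit update: write docker.service.new, diff it against the installed unit, and swap it in and restart docker only when the two differ. The same compare-then-swap pattern, sketched locally in Go (paths hypothetical; the real run does this remotely over SSH with sudo):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit installs next over current and restarts docker only when the
// installed unit differs, mirroring the diff-or-replace one-liner above.
func updateUnit(current, next string) error {
	installed, _ := os.ReadFile(current) // a missing unit simply compares as different
	proposed, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(installed, proposed) {
		return os.Remove(next) // identical: discard the .new file, no restart
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		panic(err)
	}
}

Skipping the restart when nothing changed keeps repeated provisioning runs from bouncing the docker daemon needlessly.
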
	I1221 18:46:29.861513  225945 client.go:171] LocalClient.Create took 8.435089596s
	I1221 18:46:29.861538  225945 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-123629" took 8.435139754s
	I1221 18:46:29.861545  225945 start.go:300] post-start starting for "cert-expiration-123629" (driver="docker")
	I1221 18:46:29.861553  225945 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:46:29.861613  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:46:29.861670  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:29.885488  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:29.994288  225945 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:46:29.998725  225945 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:46:29.998750  225945 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:46:29.998760  225945 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:46:29.998765  225945 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:46:29.998776  225945 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/addons for local assets ...
	I1221 18:46:29.998826  225945 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-2360/.minikube/files for local assets ...
	I1221 18:46:29.998908  225945 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem -> 76602.pem in /etc/ssl/certs
	I1221 18:46:29.999000  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:46:30.009398  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem --> /etc/ssl/certs/76602.pem (1708 bytes)
	I1221 18:46:30.041253  225945 start.go:303] post-start completed in 179.693452ms
	I1221 18:46:30.041631  225945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-123629
	I1221 18:46:30.061384  225945 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/config.json ...
	I1221 18:46:30.061682  225945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:46:30.061721  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:30.083139  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:30.189347  225945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:46:30.195395  225945 start.go:128] duration metric: createHost completed in 8.77159891s
	I1221 18:46:30.195410  225945 start.go:83] releasing machines lock for "cert-expiration-123629", held for 8.771720208s
	I1221 18:46:30.195501  225945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-123629
	I1221 18:46:30.215038  225945 ssh_runner.go:195] Run: cat /version.json
	I1221 18:46:30.215085  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:30.215374  225945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:46:30.215442  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:30.237086  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:30.244771  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:30.471014  225945 ssh_runner.go:195] Run: systemctl --version
	I1221 18:46:30.476657  225945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:46:30.482370  225945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1221 18:46:30.513137  225945 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:46:30.513206  225945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:46:30.546560  225945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
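
The find/mv pipeline above sidelines competing bridge and podman CNI configs by renaming them to *.mk_disabled. A rough local equivalent, assuming the same glob patterns:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pat)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}
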
	I1221 18:46:30.546578  225945 start.go:475] detecting cgroup driver to use...
	I1221 18:46:30.546608  225945 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:46:30.546703  225945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:46:30.566787  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1221 18:46:30.578965  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1221 18:46:30.590913  225945 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1221 18:46:30.590979  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1221 18:46:30.603036  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:46:30.614670  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1221 18:46:30.626384  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1221 18:46:30.637884  225945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:46:30.648540  225945 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1221 18:46:30.660065  225945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:46:30.670248  225945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:46:30.680066  225945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:46:30.771930  225945 ssh_runner.go:195] Run: sudo systemctl restart containerd
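
The sed edits above align containerd with the detected cgroupfs driver, most importantly forcing SystemdCgroup = false before the restart. One of those edits, sketched with a Go regexp instead of sed (same effect, assuming config.toml exists):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Match any "SystemdCgroup = ..." line, preserving its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
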
	I1221 18:46:30.889485  225945 start.go:475] detecting cgroup driver to use...
	I1221 18:46:30.889527  225945 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:46:30.889584  225945 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1221 18:46:30.906116  225945 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1221 18:46:30.906171  225945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1221 18:46:30.921139  225945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:46:30.941017  225945 ssh_runner.go:195] Run: which cri-dockerd
	I1221 18:46:30.945916  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1221 18:46:30.956294  225945 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1221 18:46:30.987708  225945 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1221 18:46:31.097236  225945 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1221 18:46:31.215236  225945 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1221 18:46:31.215381  225945 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1221 18:46:31.240011  225945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:46:31.338823  225945 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1221 18:46:31.606804  225945 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1221 18:46:31.704391  225945 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1221 18:46:31.799784  225945 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1221 18:46:31.891283  225945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:46:31.994346  225945 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1221 18:46:32.010634  225945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:46:32.112321  225945 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1221 18:46:32.193005  225945 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1221 18:46:32.193068  225945 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1221 18:46:32.197860  225945 start.go:543] Will wait 60s for crictl version
	I1221 18:46:32.197913  225945 ssh_runner.go:195] Run: which crictl
	I1221 18:46:32.203000  225945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:46:32.263941  225945 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1221 18:46:32.264001  225945 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:46:32.289974  225945 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1221 18:46:32.318392  225945 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1221 18:46:32.318485  225945 cli_runner.go:164] Run: docker network inspect cert-expiration-123629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:46:32.336163  225945 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1221 18:46:32.340531  225945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
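
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any prior entry, append the current gateway IP, and copy the result back into /etc/hosts. The same rewrite sketched in Go (run as root; the IP is the gateway of the network created earlier):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.67.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\thost.minikube.internal") { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
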
	I1221 18:46:32.353285  225945 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:46:32.353343  225945 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:46:32.373193  225945 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1221 18:46:32.373209  225945 docker.go:601] Images already preloaded, skipping extraction
	I1221 18:46:32.373278  225945 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1221 18:46:32.392944  225945 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1221 18:46:32.392958  225945 cache_images.go:84] Images are preloaded, skipping loading
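
docker.go:601 decides to skip image extraction because every image required for v1.28.4 already appears in docker images. A small sketch of that presence check (required list abbreviated from the stdout above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	required := []string{ // subset of the v1.28.4 set listed in the log
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img) // would trigger extraction instead of skipping
		}
	}
}
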
	I1221 18:46:32.393016  225945 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1221 18:46:32.451782  225945 cni.go:84] Creating CNI manager for ""
	I1221 18:46:32.451796  225945 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:46:32.451813  225945 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:46:32.451831  225945 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-123629 NodeName:cert-expiration-123629 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 18:46:32.451961  225945 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-123629"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 18:46:32.452016  225945 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=cert-expiration-123629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-123629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
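
kubeadm.go:976 renders the kubelet systemd drop-in from the node config, producing the ExecStart line above. Purely as an illustration, with invented field names, the flag rendering could look like this with text/template:

package main

import (
	"os"
	"text/template"
)

// node holds just the fields the ExecStart line needs; the struct and its
// names are hypothetical, not minikube's types.
type node struct {
	Version, Name, IP, CRISocket string
}

const unit = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--container-runtime-endpoint={{.CRISocket}} ` +
	`--hostname-override={{.Name}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, node{
		Version:   "v1.28.4",
		Name:      "cert-expiration-123629",
		IP:        "192.168.67.2",
		CRISocket: "unix:///var/run/cri-dockerd.sock",
	}); err != nil {
		panic(err)
	}
}
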
	I1221 18:46:32.452070  225945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1221 18:46:32.462320  225945 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:46:32.462378  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:46:32.472055  225945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1221 18:46:32.491870  225945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 18:46:32.515419  225945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1221 18:46:32.535878  225945 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:46:32.540011  225945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:46:32.553021  225945 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629 for IP: 192.168.67.2
	I1221 18:46:32.553044  225945 certs.go:190] acquiring lock for shared ca certs: {Name:mke521584ecf21f65224996fffab5af98b398a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:32.553178  225945 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key
	I1221 18:46:32.553219  225945 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key
	I1221 18:46:32.553264  225945 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.key
	I1221 18:46:32.553273  225945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.crt with IP's: []
	I1221 18:46:32.664060  225945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.crt ...
	I1221 18:46:32.664074  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.crt: {Name:mk60a9c8239f27e7c176fba03fd5cf5275121cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:32.664668  225945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.key ...
	I1221 18:46:32.664679  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/client.key: {Name:mk027a82ffe4caeb4ce60140a03838fab50530a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:32.664777  225945 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key.c7fa3a9e
	I1221 18:46:32.664789  225945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:46:33.007895  225945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt.c7fa3a9e ...
	I1221 18:46:33.007910  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt.c7fa3a9e: {Name:mkf3295816d3adbc18ac95ed2f8cd85f7f9dfe97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:33.008776  225945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key.c7fa3a9e ...
	I1221 18:46:33.008797  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key.c7fa3a9e: {Name:mk193efb804494745216292e15d95d99b4c93825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:33.009296  225945 certs.go:337] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt
	I1221 18:46:33.009375  225945 certs.go:341] copying /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key
	I1221 18:46:33.009426  225945 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.key
	I1221 18:46:33.009438  225945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.crt with IP's: []
	I1221 18:46:33.323927  225945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.crt ...
	I1221 18:46:33.323942  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.crt: {Name:mkdd5d10098a7e645a2d20236db8dc8018ad2e4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:33.324741  225945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.key ...
	I1221 18:46:33.324752  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.key: {Name:mk3067596595ae31d6ce4599092674e513f44472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:33.325501  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660.pem (1338 bytes)
	W1221 18:46:33.325537  225945 certs.go:433] ignoring /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660_empty.pem, impossibly tiny 0 bytes
	I1221 18:46:33.325546  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 18:46:33.325574  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/ca.pem (1082 bytes)
	I1221 18:46:33.325598  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:46:33.325621  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/certs/home/jenkins/minikube-integration/17848-2360/.minikube/certs/key.pem (1675 bytes)
	I1221 18:46:33.325663  225945 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem (1708 bytes)
	I1221 18:46:33.326281  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:46:33.354484  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 18:46:33.381902  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:46:33.409385  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/cert-expiration-123629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 18:46:33.436504  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:46:33.464299  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1221 18:46:33.492197  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:46:33.519867  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:46:33.547178  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:46:33.575314  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/certs/7660.pem --> /usr/share/ca-certificates/7660.pem (1338 bytes)
	I1221 18:46:33.603186  225945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/ssl/certs/76602.pem --> /usr/share/ca-certificates/76602.pem (1708 bytes)
	I1221 18:46:33.630794  225945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:46:33.651238  225945 ssh_runner.go:195] Run: openssl version
	I1221 18:46:33.658179  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76602.pem && ln -fs /usr/share/ca-certificates/76602.pem /etc/ssl/certs/76602.pem"
	I1221 18:46:33.669918  225945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76602.pem
	I1221 18:46:33.674210  225945 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 21 18:09 /usr/share/ca-certificates/76602.pem
	I1221 18:46:33.674280  225945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76602.pem
	I1221 18:46:33.682688  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76602.pem /etc/ssl/certs/3ec20f2e.0"
	I1221 18:46:33.694036  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:46:33.705402  225945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:46:33.709711  225945 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:46:33.709764  225945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:46:33.718279  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1221 18:46:33.729601  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7660.pem && ln -fs /usr/share/ca-certificates/7660.pem /etc/ssl/certs/7660.pem"
	I1221 18:46:33.741006  225945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7660.pem
	I1221 18:46:33.745374  225945 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 21 18:09 /usr/share/ca-certificates/7660.pem
	I1221 18:46:33.745425  225945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7660.pem
	I1221 18:46:33.753739  225945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7660.pem /etc/ssl/certs/51391683.0"
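The openssl/ln sequence above is the standard OpenSSL hashed-directory convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/&lt;subject-hash&gt;.0 symlink so TLS clients can locate it by hash. A minimal Go sketch of that pattern (illustrative only, not minikube's code; paths are taken from the log, and openssl must be on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors: openssl x509 -hash -noout -in <pem>  followed by  ln -fs <pem> <hash>.0
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths from the log; running this for real requires root.
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}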
	I1221 18:46:33.765224  225945 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:46:33.769362  225945 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:46:33.769404  225945 kubeadm.go:404] StartCluster: {Name:cert-expiration-123629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-123629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:46:33.769514  225945 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1221 18:46:33.788889  225945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:46:33.799460  225945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:46:33.809734  225945 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:46:33.809784  225945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:46:33.820189  225945 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:46:33.820220  225945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 18:46:33.873098  225945 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1221 18:46:33.873384  225945 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:46:33.937222  225945 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:46:33.937295  225945 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1221 18:46:33.937327  225945 kubeadm.go:322] OS: Linux
	I1221 18:46:33.937373  225945 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:46:33.937420  225945 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:46:33.937464  225945 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:46:33.937508  225945 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:46:33.937553  225945 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:46:33.937597  225945 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:46:33.937638  225945 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1221 18:46:33.937683  225945 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1221 18:46:33.937725  225945 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1221 18:46:34.020687  225945 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:46:34.020819  225945 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:46:34.020919  225945 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:46:34.354050  225945 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:46:34.359281  225945 out.go:204]   - Generating certificates and keys ...
	I1221 18:46:34.359471  225945 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:46:34.359541  225945 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:46:34.569575  225945 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:46:35.234880  225945 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:46:35.649368  225945 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:46:36.101841  225945 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:46:36.445358  225945 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:46:36.445537  225945 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-123629 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1221 18:46:37.533383  225945 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:46:37.533733  225945 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-123629 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1221 18:46:37.779403  225945 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:46:38.066955  225945 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:46:38.956060  225945 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:46:38.956567  225945 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:46:39.390464  225945 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:46:40.300460  225945 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:46:40.484802  225945 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:46:40.979752  225945 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:46:40.980574  225945 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:46:40.983444  225945 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:46:40.987631  225945 out.go:204]   - Booting up control plane ...
	I1221 18:46:40.987753  225945 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:46:40.987833  225945 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:46:40.987906  225945 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:46:41.002098  225945 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:46:41.002768  225945 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:46:41.002931  225945 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:46:41.110599  225945 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:46:49.115968  225945 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005839 seconds
	I1221 18:46:49.116075  225945 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:46:49.129572  225945 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:46:49.655719  225945 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:46:49.656199  225945 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-123629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 18:46:50.167332  225945 kubeadm.go:322] [bootstrap-token] Using token: 07mied.fi6vhsdspv6ye97u
	I1221 18:46:50.169230  225945 out.go:204]   - Configuring RBAC rules ...
	I1221 18:46:50.169358  225945 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:46:50.177392  225945 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:46:50.185047  225945 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:46:50.188738  225945 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:46:50.192128  225945 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:46:50.195606  225945 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:46:50.208042  225945 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:46:50.456585  225945 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:46:50.584928  225945 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:46:50.585915  225945 kubeadm.go:322] 
	I1221 18:46:50.585985  225945 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:46:50.585990  225945 kubeadm.go:322] 
	I1221 18:46:50.586061  225945 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:46:50.586065  225945 kubeadm.go:322] 
	I1221 18:46:50.586088  225945 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:46:50.586149  225945 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:46:50.586202  225945 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:46:50.586205  225945 kubeadm.go:322] 
	I1221 18:46:50.586255  225945 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1221 18:46:50.586258  225945 kubeadm.go:322] 
	I1221 18:46:50.586302  225945 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 18:46:50.586305  225945 kubeadm.go:322] 
	I1221 18:46:50.586354  225945 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:46:50.586424  225945 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:46:50.586487  225945 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:46:50.586491  225945 kubeadm.go:322] 
	I1221 18:46:50.586569  225945 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:46:50.586639  225945 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:46:50.586643  225945 kubeadm.go:322] 
	I1221 18:46:50.586720  225945 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 07mied.fi6vhsdspv6ye97u \
	I1221 18:46:50.586815  225945 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b \
	I1221 18:46:50.586833  225945 kubeadm.go:322] 	--control-plane 
	I1221 18:46:50.586837  225945 kubeadm.go:322] 
	I1221 18:46:50.586915  225945 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:46:50.586918  225945 kubeadm.go:322] 
	I1221 18:46:50.586994  225945 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 07mied.fi6vhsdspv6ye97u \
	I1221 18:46:50.587088  225945 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f6b4ffdbf866a02d45b3983f1bb1aea5de717f3ff658b4572e7c4ad93c2235b 
	I1221 18:46:50.590364  225945 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1221 18:46:50.590465  225945 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
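The --discovery-token-ca-cert-hash printed in the join commands above is, per the kubeadm documentation, a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it (CA path taken from the scp steps earlier in this log; an illustration, not minikube code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // CA copied in earlier above
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's scheme: sha256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}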
	I1221 18:46:50.590488  225945 cni.go:84] Creating CNI manager for ""
	I1221 18:46:50.590503  225945 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:46:50.593198  225945 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1221 18:46:50.595232  225945 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1221 18:46:50.609361  225945 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1221 18:46:50.654717  225945 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:46:50.654845  225945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:46:50.654913  225945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=cert-expiration-123629 minikube.k8s.io/updated_at=2023_12_21T18_46_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:46:50.982862  225945 ops.go:34] apiserver oom_adj: -16
	I1221 18:46:50.982894  225945 kubeadm.go:1088] duration metric: took 328.096047ms to wait for elevateKubeSystemPrivileges.
	I1221 18:46:50.982906  225945 kubeadm.go:406] StartCluster complete in 17.213505161s
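The oom_adj probe a few lines up confirms the apiserver runs with OOM score adjustment -16, meaning the kernel is biased against killing it under memory pressure. A rough Go equivalent of the shell one-liner from the log (simplified: pgrep -n picks the newest match to guarantee a single PID):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // the log above saw -16
}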
	I1221 18:46:50.982921  225945 settings.go:142] acquiring lock: {Name:mk8f5959956e96f0518268d8a4693f16253e6146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:50.983007  225945 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:46:50.983962  225945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-2360/kubeconfig: {Name:mkd5570705146782261fe0b7e76619864f470748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:46:50.987131  225945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:46:50.987445  225945 config.go:182] Loaded profile config "cert-expiration-123629": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:46:50.987485  225945 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1221 18:46:50.987588  225945 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-123629"
	I1221 18:46:50.987601  225945 addons.go:237] Setting addon storage-provisioner=true in "cert-expiration-123629"
	I1221 18:46:50.987638  225945 host.go:66] Checking if "cert-expiration-123629" exists ...
	I1221 18:46:50.988168  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:50.990109  225945 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-123629"
	I1221 18:46:50.990135  225945 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-123629"
	I1221 18:46:50.990462  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:51.051443  225945 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:46:51.053179  225945 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:46:51.053190  225945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:46:51.053259  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:51.081104  225945 addons.go:237] Setting addon default-storageclass=true in "cert-expiration-123629"
	I1221 18:46:51.081132  225945 host.go:66] Checking if "cert-expiration-123629" exists ...
	I1221 18:46:51.081609  225945 cli_runner.go:164] Run: docker container inspect cert-expiration-123629 --format={{.State.Status}}
	I1221 18:46:51.091291  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
	I1221 18:46:51.113239  225945 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:46:51.113251  225945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:46:51.113315  225945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-123629
	I1221 18:46:51.143191  225945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/cert-expiration-123629/id_rsa Username:docker}
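Both sshutil connections above get their port (32985) from the docker inspect template run a few lines earlier, which extracts the host port mapped to the container's 22/tcp from .NetworkSettings.Ports. A minimal Go sketch of the same lookup (container name copied from the log; illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "cert-expiration-123629").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // the log above resolved 32985
}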
	I1221 18:46:51.220113  225945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:46:51.263198  225945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:46:51.338990  225945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:46:51.515284  225945 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-123629" context rescaled to 1 replicas
	I1221 18:46:51.515327  225945 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1221 18:46:51.517486  225945 out.go:177] * Verifying Kubernetes components...
	I1221 18:46:51.519410  225945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:46:52.345814  225945 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.125677786s)
	I1221 18:46:52.345831  225945 start.go:929] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
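For readability, the sed pipeline above splices the following stanza into the CoreDNS Corefile; this is what makes host.minikube.internal resolvable from inside pods:

        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }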
	I1221 18:46:52.516959  225945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177945021s)
	I1221 18:46:52.517160  225945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253947787s)
	I1221 18:46:52.517999  225945 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:46:52.518061  225945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:46:52.528391  225945 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1221 18:46:52.530159  225945 addons.go:508] enable addons completed in 1.542673164s: enabled=[storage-provisioner default-storageclass]
	I1221 18:46:52.536803  225945 api_server.go:72] duration metric: took 1.021422795s to wait for apiserver process to appear ...
	I1221 18:46:52.536813  225945 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:46:52.536830  225945 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1221 18:46:52.546375  225945 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1221 18:46:52.547675  225945 api_server.go:141] control plane version: v1.28.4
	I1221 18:46:52.547688  225945 api_server.go:131] duration metric: took 10.869803ms to wait for apiserver health ...
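The healthz wait above is a plain HTTPS GET that succeeds once the endpoint returns 200/ok. A self-contained Go sketch of the same probe (assumption: certificate verification is skipped here for brevity, whereas minikube trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expected: 200 ok
}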
	I1221 18:46:52.547695  225945 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:46:52.553956  225945 system_pods.go:59] 5 kube-system pods found
	I1221 18:46:52.553973  225945 system_pods.go:61] "etcd-cert-expiration-123629" [0aa88cbe-5664-46d2-9e65-fc8207879e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 18:46:52.553981  225945 system_pods.go:61] "kube-apiserver-cert-expiration-123629" [cd388a4c-bfe6-4d43-8b3c-36d54ae0074a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 18:46:52.553989  225945 system_pods.go:61] "kube-controller-manager-cert-expiration-123629" [97a82112-e0de-4cea-8091-07121242f3a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 18:46:52.553997  225945 system_pods.go:61] "kube-scheduler-cert-expiration-123629" [fed0aa7c-e23b-4cfe-8364-daa4e95d46ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 18:46:52.554008  225945 system_pods.go:61] "storage-provisioner" [5dd0cd97-595c-4756-99ea-68dc0e50df65] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1221 18:46:52.554013  225945 system_pods.go:74] duration metric: took 6.313917ms to wait for pod list to return data ...
	I1221 18:46:52.554021  225945 kubeadm.go:581] duration metric: took 1.038643684s to wait for : map[apiserver:true system_pods:true] ...
	I1221 18:46:52.554031  225945 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:46:52.557014  225945 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1221 18:46:52.557028  225945 node_conditions.go:123] node cpu capacity is 2
	I1221 18:46:52.557037  225945 node_conditions.go:105] duration metric: took 3.002877ms to run NodePressure ...
	I1221 18:46:52.557054  225945 start.go:228] waiting for startup goroutines ...
	I1221 18:46:52.557059  225945 start.go:233] waiting for cluster config update ...
	I1221 18:46:52.557068  225945 start.go:242] writing updated cluster config ...
	I1221 18:46:52.557330  225945 ssh_runner.go:195] Run: rm -f paused
	I1221 18:46:52.622301  225945 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1221 18:46:52.624503  225945 out.go:177] * Done! kubectl is now configured to use "cert-expiration-123629" cluster and "default" namespace by default
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p stopped-upgrade-203518"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.17.0 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.18s)

                                                
                                    

Test pass (300/331)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.97
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 8.87
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 11.16
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
26 TestBinaryMirror 0.62
27 TestOffline 98.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 149.34
34 TestAddons/parallel/Registry 16.13
36 TestAddons/parallel/InspektorGadget 10.95
37 TestAddons/parallel/MetricsServer 5.97
40 TestAddons/parallel/CSI 69.42
41 TestAddons/parallel/Headlamp 11.79
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 52.6
44 TestAddons/parallel/NvidiaDevicePlugin 5.56
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.18
49 TestAddons/StoppedEnableDisable 11.08
50 TestCertOptions 40.98
51 TestCertExpiration 245.37
52 TestDockerFlags 34.47
53 TestForceSystemdFlag 36.61
54 TestForceSystemdEnv 39.6
60 TestErrorSpam/setup 31.84
61 TestErrorSpam/start 0.9
62 TestErrorSpam/status 1.14
63 TestErrorSpam/pause 1.44
64 TestErrorSpam/unpause 1.57
65 TestErrorSpam/stop 11.04
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 50.93
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 34.45
72 TestFunctional/serial/KubeContext 0.07
73 TestFunctional/serial/KubectlGetPods 0.11
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
77 TestFunctional/serial/CacheCmd/cache/add_local 1.01
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.38
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.16
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
85 TestFunctional/serial/ExtraConfig 41.83
86 TestFunctional/serial/ComponentHealth 0.11
87 TestFunctional/serial/LogsCmd 1.26
88 TestFunctional/serial/LogsFileCmd 1.29
89 TestFunctional/serial/InvalidService 4.43
91 TestFunctional/parallel/ConfigCmd 0.7
92 TestFunctional/parallel/DashboardCmd 12.38
93 TestFunctional/parallel/DryRun 0.62
94 TestFunctional/parallel/InternationalLanguage 0.33
95 TestFunctional/parallel/StatusCmd 1.34
99 TestFunctional/parallel/ServiceCmdConnect 9.79
100 TestFunctional/parallel/AddonsCmd 0.21
101 TestFunctional/parallel/PersistentVolumeClaim 28.18
103 TestFunctional/parallel/SSHCmd 0.83
104 TestFunctional/parallel/CpCmd 2.68
106 TestFunctional/parallel/FileSync 0.42
107 TestFunctional/parallel/CertSync 2.61
111 TestFunctional/parallel/NodeLabels 0.11
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
115 TestFunctional/parallel/License 0.36
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.8
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.43
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.32
128 TestFunctional/parallel/ServiceCmd/List 0.62
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
131 TestFunctional/parallel/ProfileCmd/profile_list 0.48
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
134 TestFunctional/parallel/ServiceCmd/Format 0.56
135 TestFunctional/parallel/MountCmd/any-port 8.38
136 TestFunctional/parallel/ServiceCmd/URL 0.58
137 TestFunctional/parallel/MountCmd/specific-port 2.56
138 TestFunctional/parallel/MountCmd/VerifyCleanup 2.19
139 TestFunctional/parallel/Version/short 0.12
140 TestFunctional/parallel/Version/components 1.12
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
145 TestFunctional/parallel/ImageCommands/ImageBuild 2.67
146 TestFunctional/parallel/ImageCommands/Setup 1.91
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
150 TestFunctional/parallel/DockerEnv/bash 1.5
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.56
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.06
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.01
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1
158 TestFunctional/delete_addon-resizer_images 0.09
159 TestFunctional/delete_my-image_image 0.02
160 TestFunctional/delete_minikube_cached_images 0.02
164 TestImageBuild/serial/Setup 31.07
165 TestImageBuild/serial/NormalBuild 1.7
166 TestImageBuild/serial/BuildWithBuildArg 0.89
167 TestImageBuild/serial/BuildWithDockerIgnore 0.73
168 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
171 TestIngressAddonLegacy/StartLegacyK8sCluster 105.39
173 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.43
174 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
178 TestJSONOutput/start/Command 88.82
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.63
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.55
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 10.88
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.26
203 TestKicCustomNetwork/create_custom_network 36.24
204 TestKicCustomNetwork/use_default_bridge_network 33.15
205 TestKicExistingNetwork 34.52
206 TestKicCustomSubnet 37.08
207 TestKicStaticIP 37.41
208 TestMainNoArgs 0.07
209 TestMinikubeProfile 74.98
212 TestMountStart/serial/StartWithMountFirst 8.24
213 TestMountStart/serial/VerifyMountFirst 0.3
214 TestMountStart/serial/StartWithMountSecond 8.05
215 TestMountStart/serial/VerifyMountSecond 0.31
216 TestMountStart/serial/DeleteFirst 1.49
217 TestMountStart/serial/VerifyMountPostDelete 0.31
218 TestMountStart/serial/Stop 1.23
219 TestMountStart/serial/RestartStopped 8.42
220 TestMountStart/serial/VerifyMountPostStop 0.29
223 TestMultiNode/serial/FreshStart2Nodes 79.26
224 TestMultiNode/serial/DeployApp2Nodes 42.78
225 TestMultiNode/serial/PingHostFrom2Pods 1.11
226 TestMultiNode/serial/AddNode 19.42
227 TestMultiNode/serial/MultiNodeLabels 0.1
228 TestMultiNode/serial/ProfileList 0.36
229 TestMultiNode/serial/CopyFile 11.64
230 TestMultiNode/serial/StopNode 2.4
231 TestMultiNode/serial/StartAfterStop 14.38
232 TestMultiNode/serial/RestartKeepsNodes 122.56
233 TestMultiNode/serial/DeleteNode 5.21
234 TestMultiNode/serial/StopMultiNode 21.74
235 TestMultiNode/serial/RestartMultiNode 86.52
236 TestMultiNode/serial/ValidateNameConflict 37.27
241 TestPreload 168.41
243 TestScheduledStopUnix 106.29
244 TestSkaffold 108.36
246 TestInsufficientStorage 14.06
247 TestRunningBinaryUpgrade 97.42
249 TestKubernetesUpgrade 141.55
250 TestMissingContainerUpgrade 198.82
252 TestPause/serial/Start 96.97
253 TestPause/serial/SecondStartNoReconfiguration 40.66
254 TestPause/serial/Pause 0.83
255 TestPause/serial/VerifyStatus 0.48
256 TestPause/serial/Unpause 0.76
257 TestPause/serial/PauseAgain 1.27
258 TestPause/serial/DeletePaused 3.35
259 TestPause/serial/VerifyDeletedResources 0.21
260 TestStoppedBinaryUpgrade/Setup 1.22
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
271 TestNoKubernetes/serial/StartWithK8s 33.92
272 TestNoKubernetes/serial/StartWithStopK8s 17.12
273 TestNoKubernetes/serial/Start 8.06
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
275 TestNoKubernetes/serial/ProfileList 0.66
276 TestNoKubernetes/serial/Stop 1.23
277 TestNoKubernetes/serial/StartNoArgs 7.85
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
292 TestStartStop/group/old-k8s-version/serial/FirstStart 137.94
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.69
295 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
297 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.02
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 568.47
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.95
302 TestStartStop/group/old-k8s-version/serial/Stop 10.88
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/old-k8s-version/serial/SecondStart 417.46
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
308 TestStartStop/group/old-k8s-version/serial/Pause 3.3
310 TestStartStop/group/embed-certs/serial/FirstStart 54.52
311 TestStartStop/group/embed-certs/serial/DeployApp 8.39
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
313 TestStartStop/group/embed-certs/serial/Stop 11.01
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
317 TestStartStop/group/embed-certs/serial/SecondStart 354.6
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.09
321 TestStartStop/group/no-preload/serial/FirstStart 59.48
322 TestStartStop/group/no-preload/serial/DeployApp 7.35
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
324 TestStartStop/group/no-preload/serial/Stop 10.98
325 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/no-preload/serial/SecondStart 346.09
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
330 TestStartStop/group/embed-certs/serial/Pause 3.34
332 TestStartStop/group/newest-cni/serial/FirstStart 49
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
335 TestStartStop/group/newest-cni/serial/Stop 11.12
336 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
338 TestStartStop/group/newest-cni/serial/SecondStart 39.64
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/no-preload/serial/Pause 3.09
342 TestNetworkPlugins/group/auto/Start 96.24
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
346 TestStartStop/group/newest-cni/serial/Pause 4.03
347 TestNetworkPlugins/group/flannel/Start 67.68
348 TestNetworkPlugins/group/flannel/ControllerPod 6.01
349 TestNetworkPlugins/group/auto/KubeletFlags 0.34
350 TestNetworkPlugins/group/auto/NetCatPod 12.27
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
352 TestNetworkPlugins/group/flannel/NetCatPod 10.29
353 TestNetworkPlugins/group/flannel/DNS 0.19
354 TestNetworkPlugins/group/flannel/Localhost 0.19
355 TestNetworkPlugins/group/flannel/HairPin 0.17
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.18
358 TestNetworkPlugins/group/auto/HairPin 0.18
359 TestNetworkPlugins/group/calico/Start 96.29
360 TestNetworkPlugins/group/custom-flannel/Start 73.36
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.52
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.51
363 TestNetworkPlugins/group/custom-flannel/DNS 0.23
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.44
368 TestNetworkPlugins/group/calico/NetCatPod 12.49
369 TestNetworkPlugins/group/false/Start 92.51
370 TestNetworkPlugins/group/calico/DNS 0.25
371 TestNetworkPlugins/group/calico/Localhost 0.25
372 TestNetworkPlugins/group/calico/HairPin 0.23
373 TestNetworkPlugins/group/kindnet/Start 64.85
374 TestNetworkPlugins/group/false/KubeletFlags 0.51
375 TestNetworkPlugins/group/false/NetCatPod 10.4
376 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
377 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
378 TestNetworkPlugins/group/false/DNS 0.25
379 TestNetworkPlugins/group/false/Localhost 0.26
380 TestNetworkPlugins/group/kindnet/NetCatPod 9.34
381 TestNetworkPlugins/group/false/HairPin 0.27
382 TestNetworkPlugins/group/kindnet/DNS 0.28
383 TestNetworkPlugins/group/kindnet/Localhost 0.22
384 TestNetworkPlugins/group/kindnet/HairPin 0.3
385 TestNetworkPlugins/group/kubenet/Start 94.16
386 TestNetworkPlugins/group/enable-default-cni/Start 54.25
387 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
388 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
389 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
390 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
391 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
392 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
393 TestNetworkPlugins/group/kubenet/NetCatPod 11.35
394 TestNetworkPlugins/group/bridge/Start 92.54
395 TestNetworkPlugins/group/kubenet/DNS 0.22
396 TestNetworkPlugins/group/kubenet/Localhost 0.18
397 TestNetworkPlugins/group/kubenet/HairPin 0.18
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
399 TestNetworkPlugins/group/bridge/NetCatPod 10.25
400 TestNetworkPlugins/group/bridge/DNS 0.22
401 TestNetworkPlugins/group/bridge/Localhost 0.16
402 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (12.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.971286821s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.97s)
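Each "(dbg) Run … Done: … (12.97s)" pair above is the harness timing a minikube invocation. A simplified Go stand-in for that run-and-time pattern (command line copied from the log; this is not the actual test-util helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"start", "-o=json", "--download-only", "-p", "download-only-125953",
		"--force", "--alsologtostderr", "--kubernetes-version=v1.16.0",
		"--container-runtime=docker", "--driver=docker"}
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("(dbg) Done: err=%v (%s)\n%s", err, time.Since(start), out)
}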

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-125953
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-125953: exit status 85 (88.886124ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:07.926386    7665 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:07.926603    7665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:07.926630    7665 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:07.926649    7665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:07.926934    7665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	W1221 18:03:07.927141    7665 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: no such file or directory
	I1221 18:03:07.927644    7665 out.go:303] Setting JSON to true
	I1221 18:03:07.928470    7665 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2735,"bootTime":1703179053,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:03:07.928564    7665 start.go:138] virtualization:  
	I1221 18:03:07.932052    7665 out.go:97] [download-only-125953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W1221 18:03:07.932286    7665 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball: no such file or directory
	I1221 18:03:07.934299    7665 out.go:169] MINIKUBE_LOCATION=17848
	I1221 18:03:07.932400    7665 notify.go:220] Checking for updates...
	I1221 18:03:07.938229    7665 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:07.941036    7665 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:03:07.943041    7665 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:03:07.945326    7665 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1221 18:03:07.949428    7665 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:03:07.949687    7665 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:03:07.972982    7665 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:07.973077    7665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:08.369096    7665 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-21 18:03:08.359327603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:08.369217    7665 docker.go:295] overlay module found
	I1221 18:03:08.371497    7665 out.go:97] Using the docker driver based on user configuration
	I1221 18:03:08.371520    7665 start.go:298] selected driver: docker
	I1221 18:03:08.371526    7665 start.go:902] validating driver "docker" against <nil>
	I1221 18:03:08.371635    7665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:08.445816    7665 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-21 18:03:08.436788605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:08.445966    7665 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:03:08.446255    7665 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1221 18:03:08.446444    7665 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 18:03:08.448911    7665 out.go:169] Using Docker driver with root privileges
	I1221 18:03:08.450678    7665 cni.go:84] Creating CNI manager for ""
	I1221 18:03:08.450702    7665 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1221 18:03:08.450714    7665 start_flags.go:323] config:
	{Name:download-only-125953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-125953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:08.452785    7665 out.go:97] Starting control plane node download-only-125953 in cluster download-only-125953
	I1221 18:03:08.452803    7665 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:03:08.454685    7665 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:08.454705    7665 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1221 18:03:08.454843    7665 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:08.471531    7665 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:08.471714    7665 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:08.471818    7665 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:08.574357    7665 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1221 18:03:08.574390    7665 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:08.574544    7665 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1221 18:03:08.576697    7665 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1221 18:03:08.576722    7665 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:03:08.767153    7665 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-125953"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
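
Note: the preload download above carries its expected digest in the URL ("?checksum=md5:a000baffb0664b293d602f95ed25caa6"), and the log shows matching "getting checksum" steps. Below is a minimal Go sketch of that kind of verification, assuming nothing about minikube's internal helpers beyond what the log shows; verifyMD5 is an illustrative name, not minikube's API.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 reports whether the file at path hashes to want (hex-encoded MD5).
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest taken from the download URL logged above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4",
			"a000baffb0664b293d602f95ed25caa6")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}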

TestDownloadOnly/v1.28.4/json-events (8.87s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.871690894s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (8.87s)
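
Note: the json-events tests drive "minikube start -o=json", which streams JSON events on stdout, one object per line. A self-contained sketch of consuming that stream follows; the generic map type is an assumption, not minikube's real event schema.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
			"--download-only", "-p", "download-only-125953")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := cmd.Start(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Decode each stdout line as one JSON event.
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			fmt.Println("event:", ev)
		}
		_ = cmd.Wait()
	}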

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-125953
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-125953: exit status 85 (88.720248ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:20.991001    7739 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:20.991212    7739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:20.991241    7739 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:20.991261    7739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:20.991536    7739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	W1221 18:03:20.991707    7739 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: no such file or directory
	I1221 18:03:20.992001    7739 out.go:303] Setting JSON to true
	I1221 18:03:20.992720    7739 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2748,"bootTime":1703179053,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:03:20.992814    7739 start.go:138] virtualization:  
	I1221 18:03:20.995289    7739 out.go:97] [download-only-125953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:03:20.997354    7739 out.go:169] MINIKUBE_LOCATION=17848
	I1221 18:03:20.995546    7739 notify.go:220] Checking for updates...
	I1221 18:03:21.001218    7739 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:21.002939    7739 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:03:21.004914    7739 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:03:21.006714    7739 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1221 18:03:21.010942    7739 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:03:21.011551    7739 config.go:182] Loaded profile config "download-only-125953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1221 18:03:21.011604    7739 start.go:810] api.Load failed for download-only-125953: filestore "download-only-125953": Docker machine "download-only-125953" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:21.011723    7739 driver.go:392] Setting default libvirt URI to qemu:///system
	W1221 18:03:21.011752    7739 start.go:810] api.Load failed for download-only-125953: filestore "download-only-125953": Docker machine "download-only-125953" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:21.035211    7739 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:21.035310    7739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:21.124792    7739 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:21.115024066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:21.124900    7739 docker.go:295] overlay module found
	I1221 18:03:21.127245    7739 out.go:97] Using the docker driver based on existing profile
	I1221 18:03:21.127271    7739 start.go:298] selected driver: docker
	I1221 18:03:21.127278    7739 start.go:902] validating driver "docker" against &{Name:download-only-125953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-125953 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:21.127560    7739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:21.199136    7739 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:21.189862098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:21.199633    7739 cni.go:84] Creating CNI manager for ""
	I1221 18:03:21.199660    7739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:03:21.199677    7739 start_flags.go:323] config:
	{Name:download-only-125953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-125953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GP
Us:}
	I1221 18:03:21.201578    7739 out.go:97] Starting control plane node download-only-125953 in cluster download-only-125953
	I1221 18:03:21.201600    7739 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:03:21.203567    7739 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:21.203590    7739 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:03:21.203678    7739 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:21.220239    7739 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:21.220351    7739 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:21.220371    7739 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:03:21.220376    7739 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:03:21.220384    7739 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:03:21.285701    7739 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1221 18:03:21.285726    7739 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:21.285881    7739 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1221 18:03:21.287950    7739 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1221 18:03:21.287972    7739 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:03:21.438454    7739 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-125953"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.2/json-events (11.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-125953 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.164532041s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.16s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-125953
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-125953: exit status 85 (87.078614ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-125953 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-125953           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:29.957228    7811 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:29.957413    7811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:29.957439    7811 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:29.957459    7811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:29.957732    7811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	W1221 18:03:29.957867    7811 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-2360/.minikube/config/config.json: no such file or directory
	I1221 18:03:29.958141    7811 out.go:303] Setting JSON to true
	I1221 18:03:29.958880    7811 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2757,"bootTime":1703179053,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:03:29.958977    7811 start.go:138] virtualization:  
	I1221 18:03:29.961269    7811 out.go:97] [download-only-125953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:03:29.963642    7811 out.go:169] MINIKUBE_LOCATION=17848
	I1221 18:03:29.961547    7811 notify.go:220] Checking for updates...
	I1221 18:03:29.967569    7811 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:29.969405    7811 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:03:29.971045    7811 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:03:29.972747    7811 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1221 18:03:29.976096    7811 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:03:29.976602    7811 config.go:182] Loaded profile config "download-only-125953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1221 18:03:29.976648    7811 start.go:810] api.Load failed for download-only-125953: filestore "download-only-125953": Docker machine "download-only-125953" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:29.976763    7811 driver.go:392] Setting default libvirt URI to qemu:///system
	W1221 18:03:29.976793    7811 start.go:810] api.Load failed for download-only-125953: filestore "download-only-125953": Docker machine "download-only-125953" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:29.999182    7811 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:29.999267    7811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:30.083030    7811 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:30.073224367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:30.083133    7811 docker.go:295] overlay module found
	I1221 18:03:30.085458    7811 out.go:97] Using the docker driver based on existing profile
	I1221 18:03:30.085486    7811 start.go:298] selected driver: docker
	I1221 18:03:30.085494    7811 start.go:902] validating driver "docker" against &{Name:download-only-125953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-125953 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:30.085673    7811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:30.163545    7811 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-21 18:03:30.154423615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:03:30.164016    7811 cni.go:84] Creating CNI manager for ""
	I1221 18:03:30.164041    7811 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1221 18:03:30.164055    7811 start_flags.go:323] config:
	{Name:download-only-125953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-125953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m
0s GPUs:}
	I1221 18:03:30.166035    7811 out.go:97] Starting control plane node download-only-125953 in cluster download-only-125953
	I1221 18:03:30.166066    7811 cache.go:121] Beginning downloading kic base image for docker with docker
	I1221 18:03:30.167722    7811 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:30.167745    7811 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1221 18:03:30.167835    7811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:30.186902    7811 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:30.187018    7811 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:30.187039    7811 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:03:30.187044    7811 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:03:30.187052    7811 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:03:30.251143    7811 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I1221 18:03:30.251181    7811 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:30.251329    7811 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1221 18:03:30.253582    7811 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1221 18:03:30.253598    7811 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:03:30.419746    7811 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:7f92af488e495f8b22ad9bc5e5eadd2f -> /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I1221 18:03:39.513583    7811 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I1221 18:03:39.513725    7811 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17848-2360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-125953"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-125953
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-911256 --alsologtostderr --binary-mirror http://127.0.0.1:42837 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-911256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-911256
--- PASS: TestBinaryMirror (0.62s)
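
Note: --binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at a local HTTP endpoint (127.0.0.1:42837 in this run). The smallest thing that can stand in for such a mirror is a plain file server; this is a hedged sketch, and the ./mirror directory name is illustrative, not the test's actual layout.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local cache of kubernetes binaries on the port seen in the log.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:42837", nil))
	}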

TestOffline (98.63s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-338296 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-338296 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m36.332436118s)
helpers_test.go:175: Cleaning up "offline-docker-338296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-338296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-338296: (2.298448066s)
--- PASS: TestOffline (98.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-203484
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-203484: exit status 85 (107.050303ms)

-- stdout --
	* Profile "addons-203484" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-203484"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)
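
Note: this check (and the Disabling variant below) passes precisely because the command fails with exit status 85 against a profile that does not exist yet. A hedged Go sketch of asserting a specific exit code from a CLI run; the exitCode helper is made up for illustration, not the helpers_test.go API.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode extracts the process exit code, or 0 when the command succeeded.
	func exitCode(err error) int {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return 0
	}

	func main() {
		err := exec.Command("out/minikube-linux-arm64",
			"addons", "enable", "dashboard", "-p", "addons-203484").Run()
		if code := exitCode(err); code != 85 {
			fmt.Printf("expected exit status 85, got %d\n", code)
		}
	}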

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-203484
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-203484: exit status 85 (96.068451ms)

-- stdout --
	* Profile "addons-203484" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-203484"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (149.34s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-203484 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-203484 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m29.336273205s)
--- PASS: TestAddons/Setup (149.34s)

TestAddons/parallel/Registry (16.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 42.663925ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-szhr4" [986c71e3-270d-41e4-9e7f-6efe46c2eb42] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004724603s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h8kcp" [23424afa-efa8-4377-ae08-80f4b38577c2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004739433s
addons_test.go:340: (dbg) Run:  kubectl --context addons-203484 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-203484 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-203484 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.960584557s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.13s)
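
Note: the registry test resolves the cluster IP with "minikube ip" and exercises the registry endpoint; the stray "[DEBUG] GET http://192.168.49.2:5000" line further down in this report apparently comes from such a probe. A minimal reachability check under that assumption (the IP is this run's, the rest is illustrative):

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.49.2:5000") // IP from "minikube ip"
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry not reachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}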

TestAddons/parallel/InspektorGadget (10.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bzn7z" [4f47b6ed-5e4e-4748-813c-da3f9c10436d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004244293s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-203484
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-203484: (5.944447373s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

TestAddons/parallel/MetricsServer (5.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.538718ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7twwp" [50b8d452-363a-43e1-97fa-ae09bd377626] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005049774s
addons_test.go:415: (dbg) Run:  kubectl --context addons-203484 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.97s)

TestAddons/parallel/CSI (69.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 42.669258ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/12/21 18:06:27 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b0b474d8-f8f5-48fc-8a41-08e59cc0f42d] Pending
helpers_test.go:344: "task-pv-pod" [b0b474d8-f8f5-48fc-8a41-08e59cc0f42d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b0b474d8-f8f5-48fc-8a41-08e59cc0f42d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004262861s
addons_test.go:584: (dbg) Run:  kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-203484 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-203484 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-203484 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-203484 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f12ba229-68fa-495c-9a56-b2658aa11da5] Pending
helpers_test.go:344: "task-pv-pod-restore" [f12ba229-68fa-495c-9a56-b2658aa11da5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f12ba229-68fa-495c-9a56-b2658aa11da5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004133633s
addons_test.go:626: (dbg) Run:  kubectl --context addons-203484 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-203484 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-203484 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.782132756s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 addons disable volumesnapshots --alsologtostderr -v=1: (1.178282187s)
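The repeated helpers_test.go:394 lines above are a jsonpath poll of the PVC phase. A rough shell sketch of the whole create/snapshot/restore cycle, assuming the same testdata manifests the test uses:

	kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pvc.yaml
	until [ "$(kubectl --context addons-203484 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done
	kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/snapshot.yaml
	until [ "$(kubectl --context addons-203484 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}')" = "true" ]; do sleep 2; done
	kubectl --context addons-203484 delete pod task-pv-pod && kubectl --context addons-203484 delete pvc hpvc
	kubectl --context addons-203484 create -f testdata/csi-hostpath-driver/pvc-restore.yaml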
--- PASS: TestAddons/parallel/CSI (69.42s)

TestAddons/parallel/Headlamp (11.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-203484 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-203484 --alsologtostderr -v=1: (1.780635042s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-qmwzh" [38ec1270-380f-4eaa-a143-10b5181eae27] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-qmwzh" [38ec1270-380f-4eaa-a143-10b5181eae27] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004512997s
--- PASS: TestAddons/parallel/Headlamp (11.79s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-nlxc9" [336eb77a-3bb6-4c8a-83a8-c84340ace3ec] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004242862s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-203484
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (52.60s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-203484 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-203484 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3b3a89d3-aa35-4e03-9101-04ad1cf57f87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3b3a89d3-aa35-4e03-9101-04ad1cf57f87] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3b3a89d3-aa35-4e03-9101-04ad1cf57f87] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.018244553s
addons_test.go:891: (dbg) Run:  kubectl --context addons-203484 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 ssh "cat /opt/local-path-provisioner/pvc-421b46fb-639d-4baf-8168-b6718ed3e616_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-203484 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-203484 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-203484 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-203484 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.239572749s)
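The ssh "cat" step works because local-path backs each PVC with a host directory named after the volume; a sketch of the same check (the <volume>_<namespace>_<pvc> directory layout is inferred from the path in the log above):

	PV=$(kubectl --context addons-203484 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	minikube -p addons-203484 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"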
--- PASS: TestAddons/parallel/LocalPath (52.60s)

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tx6g6" [037880b9-fb03-4c8d-9f30-d725cf9ea97b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004283025s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-203484
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (6.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-p6t8w" [a9e6745a-0f08-46a7-bbcb-7ca3c297718c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003348192s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-203484 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-203484 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (11.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-203484
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-203484: (10.735200656s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-203484
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-203484
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-203484
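The point of this test is that addon toggling is accepted even against a stopped cluster; by hand:

	minikube stop -p addons-203484
	minikube addons enable dashboard -p addons-203484
	minikube addons disable dashboard -p addons-203484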
--- PASS: TestAddons/StoppedEnableDisable (11.08s)

TestCertOptions (40.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-706133 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-706133 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (37.874502184s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-706133 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-706133 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-706133 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-706133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-706133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-706133: (2.224691343s)
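The openssl step is what verifies the extra SANs and the custom API server port; a manual spot-check of the same certificate could look like this (the grep filters are illustrative, not part of the test):

	minikube -p cert-options-706133 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	kubectl --context cert-options-706133 config view --minify | grep server    # expect port 8555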
--- PASS: TestCertOptions (40.98s)

TestCertExpiration (245.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-123629 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1221 18:46:46.194945    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-123629 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (31.521329558s)
E1221 18:47:03.468978    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:47:13.878872    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-123629 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1221 18:50:06.511909    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-123629 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (31.620709907s)
helpers_test.go:175: Cleaning up "cert-expiration-123629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-123629
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-123629: (2.230872264s)
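The test starts with a 3-minute certificate lifetime, lets it lapse, then restarts with --cert-expiration=8760h to force regeneration. To inspect the resulting validity window by hand (a sketch, not a step the test runs):

	minikube -p cert-expiration-123629 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"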
--- PASS: TestCertExpiration (245.37s)

TestDockerFlags (34.47s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-715174 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-715174 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.672411144s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-715174 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-715174 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-715174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-715174
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-715174: (2.129696971s)
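The two systemctl queries confirm that the --docker-env values reached the daemon's Environment and the --docker-opt values its ExecStart line; roughly:

	minikube -p docker-flags-715174 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	minikube -p docker-flags-715174 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true opts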
--- PASS: TestDockerFlags (34.47s)

TestForceSystemdFlag (36.61s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-143445 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1221 18:44:15.485722    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-143445 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.085392915s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-143445 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-143445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-143445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-143445: (2.132915005s)
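The assertion here is a single value: with --force-systemd the runtime must report the systemd cgroup driver.

	minikube -p force-systemd-flag-143445 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd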
--- PASS: TestForceSystemdFlag (36.61s)

TestForceSystemdEnv (39.60s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-145625 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1221 18:45:47.310462    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:46:12.431083    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-145625 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.090787948s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-145625 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-145625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-145625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-145625: (2.129477696s)
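Same check as TestForceSystemdFlag, but driven by the MINIKUBE_FORCE_SYSTEMD environment variable instead of the flag (a sketch of the equivalent session):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-145625 --memory=2048 --driver=docker --container-runtime=docker
	minikube -p force-systemd-env-145625 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd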
--- PASS: TestForceSystemdEnv (39.60s)

TestErrorSpam/setup (31.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-029612 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-029612 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-029612 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-029612 --driver=docker  --container-runtime=docker: (31.844442005s)
--- PASS: TestErrorSpam/setup (31.84s)

TestErrorSpam/start (0.90s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (11.04s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 stop: (10.80279292s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-029612 --log_dir /tmp/nospam-029612 stop
--- PASS: TestErrorSpam/stop (11.04s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17848-2360/.minikube/files/etc/test/nested/copy/7660/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-881514 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (50.926354032s)
--- PASS: TestFunctional/serial/StartWithProxy (50.93s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-881514 --alsologtostderr -v=8: (34.440137346s)
functional_test.go:659: soft start took 34.445073059s for "functional-881514" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.45s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-881514 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 cache add registry.k8s.io/pause:3.1: (1.106906071s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 cache add registry.k8s.io/pause:3.3: (1.044257175s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache add registry.k8s.io/pause:latest
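cache add pulls the image on the host and preloads it into the node; the equivalent CLI session, with a check that the image landed in the node's runtime:

	minikube -p functional-881514 cache add registry.k8s.io/pause:3.1
	minikube cache list                                     # the cache itself is not per-profile
	minikube -p functional-881514 ssh sudo crictl images    # pause:3.1 should be listed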
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-881514 /tmp/TestFunctionalserialCacheCmdcacheadd_local1181212962/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache add minikube-local-cache-test:functional-881514
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache delete minikube-local-cache-test:functional-881514
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-881514
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (348.355882ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
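The non-zero exit above is the expected midpoint of this test: the image was deleted inside the node, so inspecti fails until cache reload pushes the cached copy back in. The full round trip:

	minikube -p functional-881514 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-881514 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: not present
	minikube -p functional-881514 cache reload
	minikube -p functional-881514 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds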
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 kubectl -- --context functional-881514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-881514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 18:11:12.433667    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.440832    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.450980    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.471204    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.511828    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.592711    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:12.759425    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:13.079914    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:13.720486    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:15.000769    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:17.560970    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:22.681429    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:32.921977    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:11:53.402727    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-881514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.826438327s)
functional_test.go:757: restart took 41.826552938s for "functional-881514" cluster.
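--extra-config forwards per-component flags through to kubeadm; here it enables an extra admission plugin on the apiserver. One way to confirm the flag took effect afterwards (a sketch; the static-pod name follows the usual kubeadm <component>-<node> convention):

	kubectl --context functional-881514 -n kube-system get pod kube-apiserver-functional-881514 -o yaml | grep enable-admission-plugins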
--- PASS: TestFunctional/serial/ExtraConfig (41.83s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-881514 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
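The health check parses the control-plane pods as JSON and asserts phase Running plus condition Ready; a compact jsonpath version of the same query:

	kubectl --context functional-881514 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'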
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 logs: (1.264239918s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 logs --file /tmp/TestFunctionalserialLogsFileCmd2275174708/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 logs --file /tmp/TestFunctionalserialLogsFileCmd2275174708/001/logs.txt: (1.286611148s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.43s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-881514 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-881514
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-881514: exit status 115 (633.281467ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32216 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-881514 delete -f testdata/invalidsvc.yaml
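Exit status 115 (SVC_UNREACHABLE) is the expected result here: the Service object exists, but no running pod backs it, so minikube refuses to open it. Reproduced by hand:

	kubectl --context functional-881514 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-881514    # exits 115
	kubectl --context functional-881514 delete -f testdata/invalidsvc.yaml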
--- PASS: TestFunctional/serial/InvalidService (4.43s)

TestFunctional/parallel/ConfigCmd (0.70s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 config get cpus: exit status 14 (106.804017ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 config get cpus: exit status 14 (171.191158ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
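config get exits non-zero (status 14 in this run) whenever the key is unset, which is what brackets the set/unset round trip:

	minikube -p functional-881514 config get cpus     # non-zero: key not set
	minikube -p functional-881514 config set cpus 2
	minikube -p functional-881514 config get cpus     # prints 2
	minikube -p functional-881514 config unset cpus
	minikube -p functional-881514 config get cpus     # non-zero again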
--- PASS: TestFunctional/parallel/ConfigCmd (0.70s)

TestFunctional/parallel/DashboardCmd (12.38s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-881514 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-881514 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47050: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.38s)

TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (280.82852ms)

-- stdout --
	* [functional-881514] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1221 18:12:36.210989   46668 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:12:36.211281   46668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:12:36.211309   46668 out.go:309] Setting ErrFile to fd 2...
	I1221 18:12:36.211329   46668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:12:36.211707   46668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:12:36.212089   46668 out.go:303] Setting JSON to false
	I1221 18:12:36.213102   46668 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3304,"bootTime":1703179053,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:12:36.213194   46668 start.go:138] virtualization:  
	I1221 18:12:36.215819   46668 out.go:177] * [functional-881514] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1221 18:12:36.218181   46668 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:12:36.219784   46668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:12:36.218262   46668 notify.go:220] Checking for updates...
	I1221 18:12:36.221453   46668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:12:36.223039   46668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:12:36.224631   46668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1221 18:12:36.226408   46668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:12:36.228446   46668 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:12:36.228989   46668 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:12:36.277352   46668 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:12:36.277458   46668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:12:36.392731   46668 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-21 18:12:36.383427715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:12:36.392838   46668 docker.go:295] overlay module found
	I1221 18:12:36.395203   46668 out.go:177] * Using the docker driver based on existing profile
	I1221 18:12:36.396993   46668 start.go:298] selected driver: docker
	I1221 18:12:36.397019   46668 start.go:902] validating driver "docker" against &{Name:functional-881514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-881514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:12:36.397182   46668 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:12:36.399634   46668 out.go:177] 
	W1221 18:12:36.401253   46668 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 18:12:36.402942   46668 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.62s)
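
Note: TestFunctional/parallel/DryRun first requests an undersized cluster to confirm memory validation (the RSRC_INSUFFICIENT_REQ_MEMORY exit in the stderr above), then re-runs without the constraint to confirm a valid config is accepted; --dry-run validates without creating or mutating anything. A minimal sketch of the failing check outside the harness, reusing this run's profile (per the localized run below, the resource error maps to exit code 23):

  out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
  echo $?   # 23 expected, matching the exit status recorded below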

x
+
TestFunctional/parallel/InternationalLanguage (0.33s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (328.513601ms)

-- stdout --
	* [functional-881514] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1221 18:12:35.906154   46583 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:12:35.906365   46583 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:12:35.906375   46583 out.go:309] Setting ErrFile to fd 2...
	I1221 18:12:35.906382   46583 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:12:35.907610   46583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:12:35.908044   46583 out.go:303] Setting JSON to false
	I1221 18:12:35.909009   46583 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3303,"bootTime":1703179053,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1221 18:12:35.909080   46583 start.go:138] virtualization:  
	I1221 18:12:35.914487   46583 out.go:177] * [functional-881514] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1221 18:12:35.916258   46583 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:12:35.918646   46583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:12:35.916347   46583 notify.go:220] Checking for updates...
	I1221 18:12:35.920741   46583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	I1221 18:12:35.922628   46583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	I1221 18:12:35.924515   46583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1221 18:12:35.926503   46583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:12:35.928825   46583 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:12:35.929405   46583 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:12:35.963431   46583 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:12:35.963601   46583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:12:36.109478   46583 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-21 18:12:36.098602981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:12:36.109585   46583 docker.go:295] overlay module found
	I1221 18:12:36.113312   46583 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1221 18:12:36.115257   46583 start.go:298] selected driver: docker
	I1221 18:12:36.115281   46583 start.go:902] validating driver "docker" against &{Name:functional-881514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-881514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:12:36.115459   46583 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:12:36.117825   46583 out.go:177] 
	W1221 18:12:36.119870   46583 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 18:12:36.121726   46583 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.33s)
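
Note: the French output above shows that minikube picks its message catalog from the caller's locale ("Utilisation du pilote docker basé sur le profil existant" = "Using the docker driver based on the existing profile"; the X line reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB"). A sketch of reproducing this by hand, assuming the harness selects French through the standard locale variables (the LC_ALL mechanism here is an assumption, not shown in the log):

  LC_ALL=fr out/minikube-linux-arm64 start -p functional-881514 --dry-run --memory 250MB --driver=docker --container-runtime=docker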

x
+
TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)
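
Note: the three invocations above exercise the default status view, a Go-template format string (-f), and JSON output (-o json); the JSON form reports the same Host/Kubelet/APIServer/Kubeconfig component states that the template keys reference. For machine consumption:

  out/minikube-linux-arm64 -p functional-881514 status -o json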

x
+
TestFunctional/parallel/ServiceCmdConnect (9.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-881514 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-881514 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jgm4l" [b761c627-2c6d-4c52-84da-cf3f57060128] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jgm4l" [b761c627-2c6d-4c52-84da-cf3f57060128] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.009003601s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30855
functional_test.go:1674: http://192.168.49.2:30855: success! body:

Hostname: hello-node-connect-7799dfb7c6-jgm4l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30855
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.79s)
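
Note: this test exercises the full NodePort round trip: create a deployment, expose it on port 8080, resolve the node URL with `minikube service --url`, then fetch it (the echoserver response body is shown above). The same flow by hand, using the commands from this run:

  kubectl --context functional-881514 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-881514 expose deployment hello-node-connect --type=NodePort --port=8080
  out/minikube-linux-arm64 -p functional-881514 service hello-node-connect --url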

x
+
TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (28.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c78493b4-5ce2-48bb-9af2-b2fcde1f4604] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004479019s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-881514 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-881514 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-881514 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-881514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [228c0712-d719-4da9-994d-20cab40fdb3f] Pending
helpers_test.go:344: "sp-pod" [228c0712-d719-4da9-994d-20cab40fdb3f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [228c0712-d719-4da9-994d-20cab40fdb3f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003490644s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-881514 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-881514 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-881514 delete -f testdata/storage-provisioner/pod.yaml: (1.075071997s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-881514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9a4e6f84-b89e-4919-8ccd-c7c89e9177aa] Pending
helpers_test.go:344: "sp-pod" [9a4e6f84-b89e-4919-8ccd-c7c89e9177aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9a4e6f84-b89e-4919-8ccd-c7c89e9177aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004409765s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-881514 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.18s)
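
Note: the persistence check works by writing a file through the PVC mount, deleting the pod, scheduling a fresh pod against the same claim, and listing the file again; only the pod is recreated, so the file surviving proves the volume did. Condensed from the steps above:

  kubectl --context functional-881514 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-881514 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-881514 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-881514 exec sp-pod -- ls /tmp/mount   # foo should still be present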

x
+
TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

x
+
TestFunctional/parallel/CpCmd (2.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh -n functional-881514 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cp functional-881514:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1048812063/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh -n functional-881514 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh -n functional-881514 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.68s)
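
Note: judging from the three invocations above, `minikube cp` copies in both directions: a bare target path is interpreted inside the node, while <profile>:<path> names the node side explicitly (used here for node-to-host). A sketch of the node-to-host direction, with a hypothetical local target name:

  out/minikube-linux-arm64 -p functional-881514 cp functional-881514:/home/docker/cp-test.txt ./cp-test-copy.txt   # ./cp-test-copy.txt is illustrative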

x
+
TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/7660/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /etc/test/nested/copy/7660/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

x
+
TestFunctional/parallel/CertSync (2.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/7660.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /etc/ssl/certs/7660.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/7660.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /usr/share/ca-certificates/7660.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/76602.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /etc/ssl/certs/76602.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/76602.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /usr/share/ca-certificates/76602.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.61s)
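
Note: each certificate is checked under both its install path (e.g. /usr/share/ca-certificates/7660.pem) and a hashed name (e.g. /etc/ssl/certs/51391683.0), which follows the OpenSSL subject-hash convention that TLS libraries use for lookup. The hash can be recomputed for comparison (path reused from above; the exact output is an assumption based on that convention):

  openssl x509 -noout -hash -in /etc/ssl/certs/7660.pem   # should print 51391683 if the convention holds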

x
+
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-881514 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh "sudo systemctl is-active crio": exit status 1 (385.663109ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
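
Note: the non-zero exit is the expected result here: `systemctl is-active` prints the unit state and exits non-zero for anything but active (status 3 for inactive, as the remote command reported), so "inactive" confirms cri-o is disabled while docker is the selected runtime. The complementary check, assuming docker is the active runtime as in this run:

  out/minikube-linux-arm64 -p functional-881514 ssh "sudo systemctl is-active docker"   # should print active and exit 0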

x
+
TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 44020: os: process already finished
helpers_test.go:508: unable to kill pid 43858: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.80s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-881514 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d70f6472-b96e-4cd4-88d7-c057ffe9fc98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d70f6472-b96e-4cd4-88d7-c057ffe9fc98] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004053718s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.43s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-881514 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
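
Note: with `minikube tunnel` running (started in StartTunnel above), LoadBalancer services receive a reachable ingress IP, which this jsonpath query reads from the service status; AccessDirect below then fetches http://10.109.71.245 through it. The same lookup by hand:

  kubectl --context functional-881514 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'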

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.71.245 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-881514 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-881514 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-881514 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-8qlpf" [5098df66-9591-4988-a5ba-01e526821448] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-8qlpf" [5098df66-9591-4988-a5ba-01e526821448] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004394531s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.32s)

x
+
TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service list -o json
functional_test.go:1493: Took "643.955081ms" to run "out/minikube-linux-arm64 -p functional-881514 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "372.085785ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "107.372588ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32229
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "472.252659ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "98.469254ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

x
+
TestFunctional/parallel/MountCmd/any-port (8.38s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdany-port716235362/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703182353612639358" to /tmp/TestFunctionalparallelMountCmdany-port716235362/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703182353612639358" to /tmp/TestFunctionalparallelMountCmdany-port716235362/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703182353612639358" to /tmp/TestFunctionalparallelMountCmdany-port716235362/001/test-1703182353612639358
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (600.395778ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E1221 18:12:34.363097    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 18:12 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 18:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 18:12 test-1703182353612639358
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh cat /mount-9p/test-1703182353612639358
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-881514 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [195f8f43-69eb-4e19-b221-8cd5711f5b36] Pending
helpers_test.go:344: "busybox-mount" [195f8f43-69eb-4e19-b221-8cd5711f5b36] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [195f8f43-69eb-4e19-b221-8cd5711f5b36] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [195f8f43-69eb-4e19-b221-8cd5711f5b36] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003817032s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-881514 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdany-port716235362/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.38s)
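
Note: `minikube mount` serves the host directory to the node over 9p; the test polls `findmnt` until the mount appears (the first non-zero exit above is that startup race, not a failure), then has a busybox pod read and create files through it. A sketch with a hypothetical host directory (the command blocks, hence the background):

  out/minikube-linux-arm64 mount -p functional-881514 /tmp/demo-dir:/mount-9p &
  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p"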

x
+
TestFunctional/parallel/ServiceCmd/URL (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32229
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)

x
+
TestFunctional/parallel/MountCmd/specific-port (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdspecific-port1899600941/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (705.909899ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdspecific-port1899600941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh "sudo umount -f /mount-9p": exit status 1 (388.928288ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-881514 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdspecific-port1899600941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.56s)
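
Note: --port pins the host-side 9p server to a fixed port (46464 in this run) instead of a random one, which matters when a firewall must allow the guest-to-host connection. Same hypothetical directory as the sketch above:

  out/minikube-linux-arm64 mount -p functional-881514 /tmp/demo-dir:/mount-9p --port 46464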

x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T" /mount1: (1.28639842s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-881514 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-881514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2113806688/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

x
+
TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

x
+
TestFunctional/parallel/Version/components (1.12s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 version -o=json --components: (1.123280134s)
--- PASS: TestFunctional/parallel/Version/components (1.12s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-881514 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-881514
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-881514
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-881514 image ls --format short --alsologtostderr:
I1221 18:13:06.905067   49967 out.go:296] Setting OutFile to fd 1 ...
I1221 18:13:06.905273   49967 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:06.905279   49967 out.go:309] Setting ErrFile to fd 2...
I1221 18:13:06.905284   49967 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:06.905533   49967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
I1221 18:13:06.906271   49967 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:06.906408   49967 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:06.906963   49967 cli_runner.go:164] Run: docker container inspect functional-881514 --format={{.State.Status}}
I1221 18:13:06.924879   49967 ssh_runner.go:195] Run: systemctl --version
I1221 18:13:06.924940   49967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-881514
I1221 18:13:06.947970   49967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/functional-881514/id_rsa Username:docker}
I1221 18:13:07.058186   49967 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
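
Note: `minikube image ls` renders the same image list in several formats; this run exercises short (above), table, and json (below). For the tabular view:

  out/minikube-linux-arm64 -p functional-881514 image ls --format table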

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-881514 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 74077e780ec71 | 43.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-881514 | 828815714fe50 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/google-containers/addon-resizer      | functional-881514 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| docker.io/library/nginx                     | latest            | 8aea65d81da20 | 192MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-881514 image ls --format table --alsologtostderr:
I1221 18:13:07.552283   50110 out.go:296] Setting OutFile to fd 1 ...
I1221 18:13:07.552404   50110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.552476   50110 out.go:309] Setting ErrFile to fd 2...
I1221 18:13:07.552489   50110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.552782   50110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
I1221 18:13:07.553396   50110 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.553558   50110 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.554043   50110 cli_runner.go:164] Run: docker container inspect functional-881514 --format={{.State.Status}}
I1221 18:13:07.577349   50110 ssh_runner.go:195] Run: systemctl --version
I1221 18:13:07.577405   50110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-881514
I1221 18:13:07.601942   50110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/functional-881514/id_rsa Username:docker}
I1221 18:13:07.708875   50110 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-881514 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"828815714fe505c5a4158e15fc93558f44dbf352d6ebf412d8
3262fdf3da0e26","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-881514"],"size":"30"},{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDig
ests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-881514"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.9"],"size":"514000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-881514 image ls --format json --alsologtostderr:
I1221 18:13:07.244831   50038 out.go:296] Setting OutFile to fd 1 ...
I1221 18:13:07.245418   50038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.245433   50038 out.go:309] Setting ErrFile to fd 2...
I1221 18:13:07.245441   50038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.245748   50038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
I1221 18:13:07.247299   50038 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.247540   50038 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.248150   50038 cli_runner.go:164] Run: docker container inspect functional-881514 --format={{.State.Status}}
I1221 18:13:07.275725   50038 ssh_runner.go:195] Run: systemctl --version
I1221 18:13:07.275801   50038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-881514
I1221 18:13:07.303025   50038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/functional-881514/id_rsa Username:docker}
I1221 18:13:07.412970   50038 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
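Note: the stdout above is a single JSON array of objects with id, repoDigests, repoTags, and size keys, so it is easy to post-process. As a minimal sketch (not code from the minikube tree; the struct and field tags below are only inferred from the captured output), a small Go program could list tags and sizes from it:

	// sketch: decode `minikube image ls --format json` output piped on stdin,
	// e.g. out/minikube-linux-arm64 -p functional-881514 image ls --format json | go run parse.go
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageEntry mirrors the keys visible in the stdout above; it is an
	// illustrative type, not one taken from the minikube source.
	type imageEntry struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // sizes are emitted as strings, e.g. "244000000"
	}

	func main() {
		var images []imageEntry
		if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range images {
			for _, tag := range img.RepoTags {
				fmt.Printf("%s\t%s\n", tag, img.Size)
			}
		}
	}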
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-881514 image ls --format yaml --alsologtostderr:
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-881514
size: "32900000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 828815714fe505c5a4158e15fc93558f44dbf352d6ebf412d83262fdf3da0e26
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-881514
size: "30"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-881514 image ls --format yaml --alsologtostderr:
I1221 18:13:06.878602   49966 out.go:296] Setting OutFile to fd 1 ...
I1221 18:13:06.878807   49966 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:06.878832   49966 out.go:309] Setting ErrFile to fd 2...
I1221 18:13:06.878850   49966 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:06.879139   49966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
I1221 18:13:06.879920   49966 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:06.880120   49966 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:06.880701   49966 cli_runner.go:164] Run: docker container inspect functional-881514 --format={{.State.Status}}
I1221 18:13:06.905982   49966 ssh_runner.go:195] Run: systemctl --version
I1221 18:13:06.906029   49966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-881514
I1221 18:13:06.926987   49966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/functional-881514/id_rsa Username:docker}
I1221 18:13:07.033214   49966 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-881514 ssh pgrep buildkitd: exit status 1 (399.741928ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image build -t localhost/my-image:functional-881514 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 image build -t localhost/my-image:functional-881514 testdata/build --alsologtostderr: (2.034117382s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-881514 image build -t localhost/my-image:functional-881514 testdata/build --alsologtostderr:
I1221 18:13:07.548423   50115 out.go:296] Setting OutFile to fd 1 ...
I1221 18:13:07.548727   50115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.548755   50115 out.go:309] Setting ErrFile to fd 2...
I1221 18:13:07.548774   50115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:13:07.549141   50115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
I1221 18:13:07.549981   50115 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.551854   50115 config.go:182] Loaded profile config "functional-881514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1221 18:13:07.552539   50115 cli_runner.go:164] Run: docker container inspect functional-881514 --format={{.State.Status}}
I1221 18:13:07.589144   50115 ssh_runner.go:195] Run: systemctl --version
I1221 18:13:07.589199   50115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-881514
I1221 18:13:07.614403   50115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/functional-881514/id_rsa Username:docker}
I1221 18:13:07.722395   50115 build_images.go:151] Building image from path: /tmp/build.2650113055.tar
I1221 18:13:07.722486   50115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 18:13:07.741617   50115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2650113055.tar
I1221 18:13:07.750122   50115 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2650113055.tar: stat -c "%s %y" /var/lib/minikube/build/build.2650113055.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2650113055.tar': No such file or directory
I1221 18:13:07.750156   50115 ssh_runner.go:362] scp /tmp/build.2650113055.tar --> /var/lib/minikube/build/build.2650113055.tar (3072 bytes)
I1221 18:13:07.778105   50115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2650113055
I1221 18:13:07.788137   50115 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2650113055 -xf /var/lib/minikube/build/build.2650113055.tar
I1221 18:13:07.798582   50115 docker.go:346] Building image: /var/lib/minikube/build/build.2650113055
I1221 18:13:07.798643   50115 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-881514 /var/lib/minikube/build/build.2650113055
#0 building with "default" instance using docker driver

#1 [internal] load .dockerignore
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:21e8a7de2ca872322e49d4c2d6578a8a244f28c2094e558da591c6087afe388a done
#8 naming to localhost/my-image:functional-881514 done
#8 DONE 0.0s
I1221 18:13:09.469107   50115 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-881514 /var/lib/minikube/build/build.2650113055: (1.670440223s)
I1221 18:13:09.469196   50115 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2650113055
I1221 18:13:09.484189   50115 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2650113055.tar
I1221 18:13:09.495961   50115 build_images.go:207] Built localhost/my-image:functional-881514 from /tmp/build.2650113055.tar
I1221 18:13:09.495990   50115 build_images.go:123] succeeded building to: functional-881514
I1221 18:13:09.495995   50115 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/12/21 18:12:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.888411305s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-881514
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/DockerEnv/bash (1.5s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-881514 docker-env) && out/minikube-linux-arm64 status -p functional-881514"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-881514 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.50s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr: (4.280057461s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr: (2.794055937s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.06s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.41714173s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-881514
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 image load --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr: (3.328038961s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.01s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image save gcr.io/google-containers/addon-resizer:functional-881514 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image rm gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-881514 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.071614999s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-881514
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-881514 image save --daemon gcr.io/google-containers/addon-resizer:functional-881514 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-881514
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.00s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-881514
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-881514
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-881514
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (31.07s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-309766 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-309766 --driver=docker  --container-runtime=docker: (31.071368585s)
--- PASS: TestImageBuild/serial/Setup (31.07s)

TestImageBuild/serial/NormalBuild (1.7s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-309766
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-309766: (1.703891994s)
--- PASS: TestImageBuild/serial/NormalBuild (1.70s)

TestImageBuild/serial/BuildWithBuildArg (0.89s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-309766
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.89s)

TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-309766
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-309766
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

TestIngressAddonLegacy/StartLegacyK8sCluster (105.39s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-310121 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1221 18:13:56.284045    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-310121 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m45.393080055s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (105.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons enable ingress --alsologtostderr -v=5: (11.434903262s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.43s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-310121 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

TestJSONOutput/start/Command (88.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-050039 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1221 18:17:03.468970    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.474245    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.484492    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.504761    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.544999    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.625253    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:03.785582    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:04.106038    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:04.746875    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:06.027508    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:08.588321    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:13.709299    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:23.949467    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:17:44.429719    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-050039 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m28.819554275s)
--- PASS: TestJSONOutput/start/Command (88.82s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-050039 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-050039 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-050039 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-050039 --output=json --user=testUser: (10.881665674s)
--- PASS: TestJSONOutput/stop/Command (10.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-533114 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-533114 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.139126ms)

-- stdout --
	{"specversion":"1.0","id":"bbeff97a-dde8-4b1b-841c-40e698a83956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-533114] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3acb858-810d-49e5-87d5-fc4319eb1d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17848"}}
	{"specversion":"1.0","id":"a20b09ea-7823-4924-b690-05551db0811e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea07cec4-7983-4b43-874d-62eab8edf969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig"}}
	{"specversion":"1.0","id":"e5cd6f12-ea6d-40f4-94b5-1466ba642177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube"}}
	{"specversion":"1.0","id":"7f08a4f7-2055-40d7-878c-f1ca8aa0d92c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bf6f5ef3-bfbc-4bc9-b60d-009af3f27894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0eeafcc6-cd45-4fe4-ba42-0bfff8366dcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-533114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-533114
--- PASS: TestErrorJSONOutput (0.26s)
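Note: each stdout line above is one self-contained JSON event (specversion, id, source, type, datacontenttype, data), so the stream can be filtered line by line. A minimal sketch follows, assuming only what the captured output shows (the field names and the error event type string "io.k8s.sigs.minikube.error" are taken from the lines above, not from a minikube API definition):

	// sketch: scan `minikube start --output=json` events from stdin and surface errors,
	// e.g. out/minikube-linux-arm64 start ... --output=json | go run events.go
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent keeps only the fields used here; the names come from the
	// captured stdout above.
	type minikubeEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
			}
		}
	}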
TestKicCustomNetwork/create_custom_network (36.24s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-227262 --network=
E1221 18:18:25.389970    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-227262 --network=: (34.058776679s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-227262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-227262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-227262: (2.157336742s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.24s)

TestKicCustomNetwork/use_default_bridge_network (33.15s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-872725 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-872725 --network=bridge: (31.130898893s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-872725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-872725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-872725: (2.000872491s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.15s)

TestKicExistingNetwork (34.52s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-276760 --network=existing-network
E1221 18:19:47.310259    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-276760 --network=existing-network: (32.326849935s)
helpers_test.go:175: Cleaning up "existing-network-276760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-276760
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-276760: (2.033973894s)
--- PASS: TestKicExistingNetwork (34.52s)

TestKicCustomSubnet (37.08s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-969722 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-969722 --subnet=192.168.60.0/24: (34.894846225s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-969722 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-969722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-969722
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-969722: (2.15790811s)
--- PASS: TestKicCustomSubnet (37.08s)

TestKicStaticIP (37.41s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-693640 --static-ip=192.168.200.200
E1221 18:20:47.310126    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.315772    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.326011    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.346259    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.386495    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.466718    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.627034    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:47.947577    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:48.588436    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:49.868910    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:52.429881    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:20:57.550682    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:21:07.791882    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:21:12.431585    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-693640 --static-ip=192.168.200.200: (35.082710184s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-693640 ip
helpers_test.go:175: Cleaning up "static-ip-693640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-693640
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-693640: (2.150929782s)
--- PASS: TestKicStaticIP (37.41s)
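
A hand-run sketch of the same check, again assuming a release minikube binary and a hypothetical profile name:

  # pin the node to a static IP at creation time
  minikube start -p static-ip --static-ip=192.168.200.200
  # the reported cluster IP should match the requested address
  minikube -p static-ip ip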

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (74.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-912694 --driver=docker  --container-runtime=docker
E1221 18:21:28.272069    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-912694 --driver=docker  --container-runtime=docker: (32.345181528s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-915049 --driver=docker  --container-runtime=docker
E1221 18:22:03.468979    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:22:09.232835    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:22:31.151264    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-915049 --driver=docker  --container-runtime=docker: (37.032182618s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-912694
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-915049
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-915049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-915049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-915049: (2.098736776s)
helpers_test.go:175: Cleaning up "first-912694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-912694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-912694: (2.167502131s)
--- PASS: TestMinikubeProfile (74.98s)
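
The profile round-trip above boils down to the following commands (a sketch; the profile names are hypothetical):

  minikube start -p first --driver=docker --container-runtime=docker
  minikube start -p second --driver=docker --container-runtime=docker
  minikube profile first          # make "first" the active profile
  minikube profile list -ojson    # machine-readable listing, as queried by the test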

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-163905 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-163905 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.243753712s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-163905 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
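
A minimal sketch of the mount flags exercised in these two steps, assuming a release minikube binary (profile name hypothetical, flags taken from the log):

  # start without Kubernetes, mounting the host into the guest with explicit mount options
  minikube start -p mount-demo --memory=2048 --no-kubernetes \
    --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
    --driver=docker --container-runtime=docker
  # the host directory should be visible at the default guest mount point
  minikube -p mount-demo ssh -- ls /minikube-host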

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-166039 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-166039 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.044876542s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-166039 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-163905 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-163905 --alsologtostderr -v=5: (1.492072336s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-166039 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-166039
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-166039: (1.22796679s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.42s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-166039
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-166039: (7.424566637s)
--- PASS: TestMountStart/serial/RestartStopped (8.42s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-166039 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (79.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876828 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1221 18:23:31.153860    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876828 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.631574751s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.26s)
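
Sketch of the two-node bring-up, assuming a release minikube binary (profile name hypothetical):

  minikube start -p multinode-demo --wait=true --memory=2200 --nodes=2 \
    --driver=docker --container-runtime=docker
  minikube -p multinode-demo status   # both the control plane and the worker should report Running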

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (42.78s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-876828 -- rollout status deployment/busybox: (2.772241625s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-f6c5p -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-tl7mq -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-f6c5p -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-tl7mq -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-f6c5p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-tl7mq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.78s)
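
The repeated jsonpath queries above are the test polling until every busybox replica has been assigned a pod IP. A rough shell equivalent of that wait (the retry interval is an assumption; the jsonpath expression is the one from the log):

  # wait until the two replicas report two pod IPs
  until [ "$(kubectl get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 2 ]; do
    sleep 2
  done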

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-f6c5p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-f6c5p -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-tl7mq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876828 -- exec busybox-5bc68d56bd-tl7mq -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)
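
The pipeline used above extracts the host gateway address that minikube publishes as host.minikube.internal, then pings it once from inside the pod. A sketch using one of the pod names from the log:

  POD=busybox-5bc68d56bd-f6c5p   # any running pod from the deployment
  HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"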

                                                
                                    
TestMultiNode/serial/AddNode (19.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-876828 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-876828 -v 3 --alsologtostderr: (18.621383201s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.42s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-876828 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp testdata/cp-test.txt multinode-876828:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3448514023/001/cp-test_multinode-876828.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828:/home/docker/cp-test.txt multinode-876828-m02:/home/docker/cp-test_multinode-876828_multinode-876828-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test_multinode-876828_multinode-876828-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828:/home/docker/cp-test.txt multinode-876828-m03:/home/docker/cp-test_multinode-876828_multinode-876828-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test_multinode-876828_multinode-876828-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp testdata/cp-test.txt multinode-876828-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3448514023/001/cp-test_multinode-876828-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m02:/home/docker/cp-test.txt multinode-876828:/home/docker/cp-test_multinode-876828-m02_multinode-876828.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test_multinode-876828-m02_multinode-876828.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m02:/home/docker/cp-test.txt multinode-876828-m03:/home/docker/cp-test_multinode-876828-m02_multinode-876828-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test_multinode-876828-m02_multinode-876828-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp testdata/cp-test.txt multinode-876828-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3448514023/001/cp-test_multinode-876828-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m03:/home/docker/cp-test.txt multinode-876828:/home/docker/cp-test_multinode-876828-m03_multinode-876828.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828 "sudo cat /home/docker/cp-test_multinode-876828-m03_multinode-876828.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 cp multinode-876828-m03:/home/docker/cp-test.txt multinode-876828-m02:/home/docker/cp-test_multinode-876828-m03_multinode-876828-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test_multinode-876828-m03_multinode-876828-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.64s)
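
The copy matrix above covers all three directions of minikube cp. In condensed form (profile, node names, and paths follow the log):

  minikube -p multinode-876828 cp testdata/cp-test.txt multinode-876828:/home/docker/cp-test.txt  # host -> node
  minikube -p multinode-876828 cp multinode-876828:/home/docker/cp-test.txt /tmp/cp-test.txt      # node -> host
  minikube -p multinode-876828 cp multinode-876828:/home/docker/cp-test.txt \
    multinode-876828-m02:/home/docker/cp-test.txt                                                 # node -> node
  minikube -p multinode-876828 ssh -n multinode-876828-m02 "sudo cat /home/docker/cp-test.txt"    # verify contents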

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-876828 node stop m03: (1.241735165s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876828 status: exit status 7 (570.481027ms)

-- stdout --
	multinode-876828
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876828-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876828-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr: exit status 7 (584.964999ms)

-- stdout --
	multinode-876828
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876828-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876828-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1221 18:25:44.300297  114936 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:25:44.300487  114936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:25:44.300512  114936 out.go:309] Setting ErrFile to fd 2...
	I1221 18:25:44.300532  114936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:25:44.300805  114936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:25:44.301016  114936 out.go:303] Setting JSON to false
	I1221 18:25:44.301167  114936 notify.go:220] Checking for updates...
	I1221 18:25:44.301720  114936 mustload.go:65] Loading cluster: multinode-876828
	I1221 18:25:44.302426  114936 config.go:182] Loaded profile config "multinode-876828": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:25:44.302452  114936 status.go:255] checking status of multinode-876828 ...
	I1221 18:25:44.303094  114936 cli_runner.go:164] Run: docker container inspect multinode-876828 --format={{.State.Status}}
	I1221 18:25:44.321782  114936 status.go:330] multinode-876828 host status = "Running" (err=<nil>)
	I1221 18:25:44.321804  114936 host.go:66] Checking if "multinode-876828" exists ...
	I1221 18:25:44.322178  114936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-876828
	I1221 18:25:44.341539  114936 host.go:66] Checking if "multinode-876828" exists ...
	I1221 18:25:44.341849  114936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:25:44.341899  114936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-876828
	I1221 18:25:44.364561  114936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/multinode-876828/id_rsa Username:docker}
	I1221 18:25:44.465775  114936 ssh_runner.go:195] Run: systemctl --version
	I1221 18:25:44.471514  114936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:25:44.485150  114936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:25:44.566826  114936 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-21 18:25:44.55748151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1221 18:25:44.567635  114936 kubeconfig.go:92] found "multinode-876828" server: "https://192.168.58.2:8443"
	I1221 18:25:44.567677  114936 api_server.go:166] Checking apiserver status ...
	I1221 18:25:44.567724  114936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:25:44.581058  114936 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2191/cgroup
	I1221 18:25:44.592099  114936 api_server.go:182] apiserver freezer: "7:freezer:/docker/b298d5cef96e1a2fa39b8f5a3ab6afa9b987591fe8a49a835da0ef3e71fae144/kubepods/burstable/pod827bcfc2173c2dc85ebec2594aae71bb/23d1421cb2f43622f603c627e429b4a07253ea630c90fbe55fcfcad545770433"
	I1221 18:25:44.592171  114936 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b298d5cef96e1a2fa39b8f5a3ab6afa9b987591fe8a49a835da0ef3e71fae144/kubepods/burstable/pod827bcfc2173c2dc85ebec2594aae71bb/23d1421cb2f43622f603c627e429b4a07253ea630c90fbe55fcfcad545770433/freezer.state
	I1221 18:25:44.602181  114936 api_server.go:204] freezer state: "THAWED"
	I1221 18:25:44.602209  114936 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1221 18:25:44.611373  114936 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1221 18:25:44.611401  114936 status.go:421] multinode-876828 apiserver status = Running (err=<nil>)
	I1221 18:25:44.611417  114936 status.go:257] multinode-876828 status: &{Name:multinode-876828 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:25:44.611439  114936 status.go:255] checking status of multinode-876828-m02 ...
	I1221 18:25:44.611768  114936 cli_runner.go:164] Run: docker container inspect multinode-876828-m02 --format={{.State.Status}}
	I1221 18:25:44.629460  114936 status.go:330] multinode-876828-m02 host status = "Running" (err=<nil>)
	I1221 18:25:44.629482  114936 host.go:66] Checking if "multinode-876828-m02" exists ...
	I1221 18:25:44.629771  114936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-876828-m02
	I1221 18:25:44.650075  114936 host.go:66] Checking if "multinode-876828-m02" exists ...
	I1221 18:25:44.650382  114936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:25:44.650420  114936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-876828-m02
	I1221 18:25:44.668297  114936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17848-2360/.minikube/machines/multinode-876828-m02/id_rsa Username:docker}
	I1221 18:25:44.769518  114936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:25:44.783182  114936 status.go:257] multinode-876828-m02 status: &{Name:multinode-876828-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:25:44.783212  114936 status.go:255] checking status of multinode-876828-m03 ...
	I1221 18:25:44.783548  114936 cli_runner.go:164] Run: docker container inspect multinode-876828-m03 --format={{.State.Status}}
	I1221 18:25:44.803735  114936 status.go:330] multinode-876828-m03 host status = "Stopped" (err=<nil>)
	I1221 18:25:44.803753  114936 status.go:343] host is not running, skipping remaining checks
	I1221 18:25:44.803761  114936 status.go:257] multinode-876828-m03 status: &{Name:multinode-876828-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
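
Note that status exits non-zero (7 here) as soon as any node is stopped, so callers have to treat that exit code as "degraded" rather than as a hard failure. Sketch, using the profile and node names from the log:

  minikube -p multinode-876828 node stop m03
  minikube -p multinode-876828 status
  echo $?   # 7: at least one node reports Stopped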

                                                
                                    
TestMultiNode/serial/StartAfterStop (14.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 node start m03 --alsologtostderr
E1221 18:25:47.310716    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-876828 node start m03 --alsologtostderr: (13.508952431s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.38s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876828
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-876828
E1221 18:26:12.431454    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:26:14.994618    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-876828: (22.708042759s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876828 --wait=true -v=8 --alsologtostderr
E1221 18:27:03.469859    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:27:35.485444    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876828 --wait=true -v=8 --alsologtostderr: (1m39.660012175s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876828
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.56s)
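
The invariant being checked is that a full stop/start cycle preserves the node list. Sketch (profile name from the log, release binary assumed):

  minikube node list -p multinode-876828          # record the node set
  minikube stop -p multinode-876828
  minikube start -p multinode-876828 --wait=true
  minikube node list -p multinode-876828          # must match the pre-stop listing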

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-876828 node delete m03: (4.430551846s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-876828 stop: (21.537478001s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876828 status: exit status 7 (100.609635ms)

-- stdout --
	multinode-876828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876828-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr: exit status 7 (104.363398ms)

-- stdout --
	multinode-876828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876828-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1221 18:28:28.653526  131092 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:28:28.653688  131092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:28:28.653712  131092 out.go:309] Setting ErrFile to fd 2...
	I1221 18:28:28.653733  131092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:28:28.654020  131092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-2360/.minikube/bin
	I1221 18:28:28.654235  131092 out.go:303] Setting JSON to false
	I1221 18:28:28.654329  131092 mustload.go:65] Loading cluster: multinode-876828
	I1221 18:28:28.654372  131092 notify.go:220] Checking for updates...
	I1221 18:28:28.655547  131092 config.go:182] Loaded profile config "multinode-876828": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1221 18:28:28.655591  131092 status.go:255] checking status of multinode-876828 ...
	I1221 18:28:28.656176  131092 cli_runner.go:164] Run: docker container inspect multinode-876828 --format={{.State.Status}}
	I1221 18:28:28.674621  131092 status.go:330] multinode-876828 host status = "Stopped" (err=<nil>)
	I1221 18:28:28.674641  131092 status.go:343] host is not running, skipping remaining checks
	I1221 18:28:28.674649  131092 status.go:257] multinode-876828 status: &{Name:multinode-876828 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:28:28.674676  131092 status.go:255] checking status of multinode-876828-m02 ...
	I1221 18:28:28.674989  131092 cli_runner.go:164] Run: docker container inspect multinode-876828-m02 --format={{.State.Status}}
	I1221 18:28:28.693457  131092 status.go:330] multinode-876828-m02 host status = "Stopped" (err=<nil>)
	I1221 18:28:28.693474  131092 status.go:343] host is not running, skipping remaining checks
	I1221 18:28:28.693481  131092 status.go:257] multinode-876828-m02 status: &{Name:multinode-876828-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (86.52s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876828 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876828 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.725950606s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876828 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876828
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876828-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-876828-m02 --driver=docker  --container-runtime=docker: exit status 14 (110.463013ms)

-- stdout --
	* [multinode-876828-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-876828-m02' is duplicated with machine name 'multinode-876828-m02' in profile 'multinode-876828'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876828-m03 --driver=docker  --container-runtime=docker
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876828-m03 --driver=docker  --container-runtime=docker: (34.409953948s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-876828
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-876828: exit status 80 (397.76012ms)

-- stdout --
	* Adding node m03 to cluster multinode-876828

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-876828-m03 already exists in multinode-876828-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-876828-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-876828-m03: (2.287253825s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.27s)
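
Both failure modes above are deliberate guards: exit status 14 (MK_USAGE) when a new profile name collides with an existing machine name, and exit status 80 (GUEST_NODE_ADD) when the generated node name is already taken. Sketch of the first check, using the colliding name from the log:

  minikube start -p multinode-876828-m02 --driver=docker --container-runtime=docker
  echo $?   # 14: profile name duplicates a machine in profile multinode-876828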

                                                
                                    
TestPreload (168.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-221842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1221 18:30:47.310898    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:31:12.431574    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:32:03.469881    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-221842 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m39.100856451s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-221842 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-221842 image pull gcr.io/k8s-minikube/busybox: (1.412553031s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-221842
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-221842: (10.855286314s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-221842 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-221842 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (54.689520138s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-221842 image list
helpers_test.go:175: Cleaning up "test-preload-221842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-221842
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-221842: (2.121065089s)
--- PASS: TestPreload (168.41s)
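
The scenario condensed: build a cluster without preloaded images, pull an extra image, and confirm it survives a stop/start cycle. Sketch (profile name hypothetical; flags from the log):

  minikube start -p preload-demo --memory=2200 --preload=false \
    --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
  minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-demo
  minikube start -p preload-demo --wait=true
  minikube -p preload-demo image list   # busybox should still be listed after the restart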

                                                
                                    
TestScheduledStopUnix (106.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-749444 --memory=2048 --driver=docker  --container-runtime=docker
E1221 18:33:26.511484    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-749444 --memory=2048 --driver=docker  --container-runtime=docker: (32.838370183s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-749444 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-749444 -n scheduled-stop-749444
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-749444 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-749444 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-749444 -n scheduled-stop-749444
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-749444
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-749444 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-749444
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-749444: exit status 7 (89.269744ms)

-- stdout --
	scheduled-stop-749444
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-749444 -n scheduled-stop-749444
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-749444 -n scheduled-stop-749444: exit status 7 (88.573672ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-749444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-749444
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-749444: (1.675880062s)
--- PASS: TestScheduledStopUnix (106.29s)
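
The scheduled-stop controls exercised above, condensed (profile name hypothetical; flags from the log):

  minikube stop -p sched-demo --schedule 5m                   # arm a stop 5 minutes out
  minikube status --format={{.TimeToStop}} -p sched-demo      # inspect the pending schedule
  minikube stop -p sched-demo --cancel-scheduled              # disarm it again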

                                                
                                    
TestSkaffold (108.36s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3926801240 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-581709 --memory=2600 --driver=docker  --container-runtime=docker
E1221 18:35:47.310484    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-581709 --memory=2600 --driver=docker  --container-runtime=docker: (33.882304299s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3926801240 run --minikube-profile skaffold-581709 --kube-context skaffold-581709 --status-check=true --port-forward=false --interactive=false
E1221 18:36:12.431620    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3926801240 run --minikube-profile skaffold-581709 --kube-context skaffold-581709 --status-check=true --port-forward=false --interactive=false: (58.554220417s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-55ccbbbdb-4rwwx" [6231d845-347c-4cec-b951-6fcf9eda7381] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003298575s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-db55b99bd-kj8rt" [2084ac94-e664-4750-8a90-e9a941c290d2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003445301s
helpers_test.go:175: Cleaning up "skaffold-581709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-581709
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-581709: (2.948886391s)
--- PASS: TestSkaffold (108.36s)
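
The skaffold integration boils down to pointing skaffold at the minikube profile and kube-context (flags taken from the log; profile name hypothetical, and a released skaffold binary assumed in place of the downloaded /tmp/skaffold.exe artifact):

  minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
  skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
    --status-check=true --port-forward=false --interactive=false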

                                                
                                    
TestInsufficientStorage (14.06s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-969953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E1221 18:37:03.470021    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:37:10.355488    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-969953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.685297805s)

-- stdout --
	{"specversion":"1.0","id":"f4b3dc57-670e-4e3a-aaf8-f4601bc39ba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-969953] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a337934a-9ed5-4956-8c8f-86118a673095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17848"}}
	{"specversion":"1.0","id":"62fb7816-3e30-438f-a6fc-c7d627a91bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19783601-2700-4587-914f-73e3827da7b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig"}}
	{"specversion":"1.0","id":"04be65a0-5900-4548-8b42-257f2b38985c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube"}}
	{"specversion":"1.0","id":"ab9218e4-d5c8-4ade-90ca-a47ccf455c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e94e616e-ae97-42f4-99c8-cc90c05787ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f29e0d3a-f4fa-453f-b97d-85e17b119406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2962c6d3-ebb0-47b8-8ed4-41580913158a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c12f29e8-068b-40e6-8cfb-6ea6a84320ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2fb990b-55b8-4c41-9b5c-38cac4afacc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"16d227cb-eb58-4906-a8bc-9e33cb090e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-969953 in cluster insufficient-storage-969953","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69677445-653f-4358-8fc2-c28a79d726d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702920864-17822 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"565d3a7d-23a3-45c5-8842-a1454e431494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7f3e018-7646-4dc4-a556-1d37fa7e68ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-969953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-969953 --output=json --layout=cluster: exit status 7 (336.909066ms)

-- stdout --
	{"Name":"insufficient-storage-969953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-969953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1221 18:37:12.169952  167133 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-969953" does not appear in /home/jenkins/minikube-integration/17848-2360/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-969953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-969953 --output=json --layout=cluster: exit status 7 (330.465001ms)

-- stdout --
	{"Name":"insufficient-storage-969953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-969953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1221 18:37:12.502603  167186 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-969953" does not appear in /home/jenkins/minikube-integration/17848-2360/kubeconfig
	E1221 18:37:12.514481  167186 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/insufficient-storage-969953/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-969953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-969953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-969953: (1.707863302s)
--- PASS: TestInsufficientStorage (14.06s)
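Note on the JSON above: with --output=json, minikube prints one CloudEvents 1.0 envelope per line (event types io.k8s.sigs.minikube.step, .info and .error, with string-valued fields under "data", all visible in the log). A minimal Go sketch for consuming such a stream; the struct shape and field selection are inferred from the output above, not taken from minikube's source:

// Sketch: decode the per-line CloudEvents envelopes emitted by
// `minikube start --output=json`, as seen in the stdout block above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip anything that is not a JSON object line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		// Errors such as RSRC_DOCKER_STORAGE arrive as
		// io.k8s.sigs.minikube.error events carrying "exitcode",
		// "advice" and "message" in data.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Feeding the stdout block above into this program would end with the RSRC_DOCKER_STORAGE message from the final io.k8s.sigs.minikube.error event.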

TestRunningBinaryUpgrade (97.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.7076642.exe start -p running-upgrade-569424 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.7076642.exe start -p running-upgrade-569424 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m1.219460108s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-569424 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-569424 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.019572206s)
helpers_test.go:175: Cleaning up "running-upgrade-569424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-569424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-569424: (2.188783042s)
--- PASS: TestRunningBinaryUpgrade (97.42s)

TestKubernetesUpgrade (141.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m4.254826008s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-555316
E1221 18:40:47.312759    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-555316: (4.771316643s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-555316 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-555316 status --format={{.Host}}: exit status 7 (97.097568ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.175027161s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-555316 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (95.590461ms)

-- stdout --
	* [kubernetes-upgrade-555316] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-555316
	    minikube start -p kubernetes-upgrade-555316 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5553162 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-555316 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1221 18:41:46.194939    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.200168    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.210410    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.230724    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.271432    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.352087    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.512399    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:46.833199    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:47.473610    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:48.753801    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:51.313936    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:41:56.434909    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-555316 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.16360544s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-555316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-555316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-555316: (2.887013533s)
--- PASS: TestKubernetesUpgrade (141.55s)
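The K8S_DOWNGRADE_UNSUPPORTED failure above is a version-order guard: the requested v1.16.0 sorts below the cluster's existing v1.29.0-rc.2, so the start is refused instead of risking an unsafe in-place downgrade. A hedged Go sketch of that kind of check, using golang.org/x/mod/semver; this mirrors the observed behaviour and is not minikube's actual implementation:

// Sketch: reject a Kubernetes downgrade the way the test above expects.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange returns an error when requested < existing, since
// downgrading apiserver/etcd state in place is not safe.
func checkVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	// Reproduces the transition attempted by the test.
	fmt.Println(checkVersionChange("v1.29.0-rc.2", "v1.16.0"))
}

semver.Compare orders v1.16.0 before v1.29.0-rc.2 (prerelease tags included), so this prints the same shape of error the test asserts on.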

TestMissingContainerUpgrade (198.82s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2479512496.exe start -p missing-upgrade-464187 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2479512496.exe start -p missing-upgrade-464187 --memory=2200 --driver=docker  --container-runtime=docker: (1m56.893104573s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-464187
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-464187: (10.378030459s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-464187
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-464187 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1221 18:41:12.431724    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-464187 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m7.81113472s)
helpers_test.go:175: Cleaning up "missing-upgrade-464187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-464187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-464187: (2.457753805s)
--- PASS: TestMissingContainerUpgrade (198.82s)

TestPause/serial/Start (96.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-355773 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-355773 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m36.967180332s)
--- PASS: TestPause/serial/Start (96.97s)

TestPause/serial/SecondStartNoReconfiguration (40.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-355773 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-355773 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.630656069s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.66s)

TestPause/serial/Pause (0.83s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-355773 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

TestPause/serial/VerifyStatus (0.48s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-355773 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-355773 --output=json --layout=cluster: exit status 2 (483.974496ms)

-- stdout --
	{"Name":"pause-355773","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-355773","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
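The --layout=cluster JSON above encodes state as HTTP-like status codes (418 Paused, 405 Stopped, 507 InsufficientStorage, 200 OK, 500 Error). A small Go sketch of a decoder for that shape; the struct mirrors only the fields visible in this report and the names are illustrative:

// Sketch: decode `minikube status --output=json --layout=cluster` output.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterState struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed sample of the stdout block above.
	raw := `{"Name":"pause-355773","StatusCode":418,"StatusName":"Paused"}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A paused profile reports 418 at the cluster level while the kubelet
	// component inside each node reports 405 (Stopped), as in the log.
	fmt.Println(st.StatusName, st.StatusCode)
}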

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-355773 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (1.27s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-355773 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-355773 --alsologtostderr -v=5: (1.266167304s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

TestPause/serial/DeletePaused (3.35s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-355773 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-355773 --alsologtostderr -v=5: (3.351321055s)
--- PASS: TestPause/serial/DeletePaused (3.35s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-355773
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-355773: exit status 1 (25.43251ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-355773: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

TestStoppedBinaryUpgrade/Setup (1.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (92.038319ms)

-- stdout --
	* [NoKubernetes-039433] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-2360/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-2360/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
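The MK_USAGE failure above is plain flag validation: --kubernetes-version is meaningless when --no-kubernetes is set, so the CLI refuses the combination with exit status 14. A minimal Go sketch of such a mutual-exclusion check; the flag names match the invocation in the log, but the code itself is illustrative, not minikube's:

// Sketch: reject --kubernetes-version together with --no-kubernetes.
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		err := errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the exit status observed in the test above
	}
}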

TestNoKubernetes/serial/StartWithK8s (33.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-039433 --driver=docker  --container-runtime=docker
E1221 18:44:30.038096    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-039433 --driver=docker  --container-runtime=docker: (33.538347749s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-039433 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.92s)

TestNoKubernetes/serial/StartWithStopK8s (17.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --driver=docker  --container-runtime=docker: (14.988318407s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-039433 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-039433 status -o json: exit status 2 (345.212438ms)

-- stdout --
	{"Name":"NoKubernetes-039433","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-039433
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-039433: (1.783122625s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.12s)

TestNoKubernetes/serial/Start (8.06s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-039433 --no-kubernetes --driver=docker  --container-runtime=docker: (8.057175427s)
--- PASS: TestNoKubernetes/serial/Start (8.06s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-039433 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-039433 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.81869ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
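The assertion above relies on `systemctl is-active --quiet` exiting 0 only when the unit is active, so a non-zero exit (status 3 here, surfaced through ssh as exit status 1) is the passing case. A rough Go sketch of the same check driven from a harness; the binary path and profile name are copied from the log and assumed to exist on the runner:

// Sketch: assert that kubelet is NOT running inside the minikube node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-039433",
		"sudo systemctl is-active --quiet service kubelet")
	// A nil error means the unit is active, which would fail this check.
	if err := cmd.Run(); err == nil {
		fmt.Println("FAIL: kubelet is active but was expected to be stopped")
		return
	} else {
		fmt.Println("ok: kubelet not running:", err)
	}
}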

TestNoKubernetes/serial/ProfileList (0.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.66s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-039433
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-039433: (1.226480035s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-039433 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-039433 --driver=docker  --container-runtime=docker: (7.847458137s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-039433 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-039433 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.020723ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStartStop/group/old-k8s-version/serial/FirstStart (137.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-946821 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-946821 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m17.942270589s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.94s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-699377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1221 18:50:47.310426    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:51:12.430952    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-699377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (59.686588608s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.69s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-699377 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5e23dfc1-8175-4e48-b55b-d112b128f30a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5e23dfc1-8175-4e48-b55b-d112b128f30a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004222221s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-699377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-699377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-699377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048379964s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-699377 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-699377 --alsologtostderr -v=3
E1221 18:51:46.194323    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-699377 --alsologtostderr -v=3: (11.016386451s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377: exit status 7 (84.155189ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-699377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-699377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1221 18:52:03.470017    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-699377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (9m28.072202798s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-946821 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6bf01515-5f47-4016-a628-bb32899fce22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6bf01515-5f47-4016-a628-bb32899fce22] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003012352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-946821 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-946821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-946821 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-946821 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-946821 --alsologtostderr -v=3: (10.883791864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-946821 -n old-k8s-version-946821
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-946821 -n old-k8s-version-946821: exit status 7 (91.150544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-946821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (417.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-946821 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1221 18:53:50.355624    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:55:47.310925    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 18:56:12.430731    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 18:56:46.194614    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 18:57:03.469048    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 18:58:09.239620    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-946821 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m57.03154447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-946821 -n old-k8s-version-946821
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (417.46s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-c78l5" [9cc6586b-b65e-4c4b-bce5-de52684dbe55] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003526583s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-c78l5" [9cc6586b-b65e-4c4b-bce5-de52684dbe55] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002996874s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-946821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-946821 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-946821 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-946821 -n old-k8s-version-946821
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-946821 -n old-k8s-version-946821: exit status 2 (411.438674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-946821 -n old-k8s-version-946821
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-946821 -n old-k8s-version-946821: exit status 2 (363.323168ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-946821 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-946821 -n old-k8s-version-946821
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-946821 -n old-k8s-version-946821
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.30s)

TestStartStop/group/embed-certs/serial/FirstStart (54.52s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-785522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1221 19:00:47.311088    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 19:00:55.486166    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-785522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (54.520996014s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.52s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-785522 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e89b9d4-d213-4089-b154-9148db99d457] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8e89b9d4-d213-4089-b154-9148db99d457] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003820169s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-785522 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-785522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-785522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012913088s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-785522 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-785522 --alsologtostderr -v=3
E1221 19:01:12.431012    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-785522 --alsologtostderr -v=3: (11.01334339s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xd8hd" [976411ea-ec15-4d09-bdb9-a7e6d89a08e2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004403065s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-785522 -n embed-certs-785522
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-785522 -n embed-certs-785522: exit status 7 (91.369868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-785522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xd8hd" [976411ea-ec15-4d09-bdb9-a7e6d89a08e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004523776s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-699377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/SecondStart (354.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-785522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-785522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m53.995464218s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-785522 -n embed-certs-785522
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (354.60s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-699377 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-699377 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377: exit status 2 (511.783213ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377: exit status 2 (499.128193ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-699377 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-699377 -n default-k8s-diff-port-699377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.09s)

TestStartStop/group/no-preload/serial/FirstStart (59.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-471770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1221 19:01:46.194273    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 19:02:03.469163    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 19:02:31.386301    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.391468    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.401635    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.421891    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.462078    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.542388    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:31.702789    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:32.023295    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:32.663485    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:02:33.944574    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-471770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (59.479870683s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.48s)

TestStartStop/group/no-preload/serial/DeployApp (7.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-471770 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d935731-1449-4816-9d87-8b6716625878] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1221 19:02:36.504803    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6d935731-1449-4816-9d87-8b6716625878] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.006280164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-471770 exec busybox -- /bin/sh -c "ulimit -n"
E1221 19:02:41.625834    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-471770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-471770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.114404892s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-471770 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (10.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-471770 --alsologtostderr -v=3
E1221 19:02:51.866867    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-471770 --alsologtostderr -v=3: (10.977992592s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.98s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-471770 -n no-preload-471770
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-471770 -n no-preload-471770: exit status 7 (87.900046ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-471770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (346.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-471770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1221 19:03:12.347844    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:03:53.308689    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:05:15.229762    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:05:47.311012    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 19:06:12.431617    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 19:06:26.413253    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.418509    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.428752    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.449003    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.489272    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.570364    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:26.730769    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:27.051080    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:27.691762    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:28.971974    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:31.532549    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:36.652744    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:06:46.193972    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 19:06:46.512499    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 19:06:46.893080    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:07:03.469151    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 19:07:07.373890    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-471770 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m45.575558217s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-471770 -n no-preload-471770
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r74xd" [a988b454-2590-4b39-b507-6c1191e46fa3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r74xd" [a988b454-2590-4b39-b507-6c1191e46fa3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003854878s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r74xd" [a988b454-2590-4b39-b507-6c1191e46fa3] Running
E1221 19:07:31.385986    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003585036s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-785522 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-785522 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-785522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-785522 -n embed-certs-785522
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-785522 -n embed-certs-785522: exit status 2 (393.871344ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-785522 -n embed-certs-785522
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-785522 -n embed-certs-785522: exit status 2 (386.802286ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-785522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-785522 -n embed-certs-785522
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-785522 -n embed-certs-785522
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.34s)

TestStartStop/group/newest-cni/serial/FirstStart (49s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-729173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1221 19:07:48.334054    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:07:59.070179    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-729173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (48.995555453s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.00s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-729173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-729173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126800493s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (11.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-729173 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-729173 --alsologtostderr -v=3: (11.123894415s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf79r" [2d6029fb-17ef-4851-bc7a-af006e3f5d79] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf79r" [2d6029fb-17ef-4851-bc7a-af006e3f5d79] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003926885s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729173 -n newest-cni-729173
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729173 -n newest-cni-729173: exit status 7 (176.985821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-729173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/newest-cni/serial/SecondStart (39.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-729173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-729173 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (39.158939361s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729173 -n newest-cni-729173
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.64s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xf79r" [2d6029fb-17ef-4851-bc7a-af006e3f5d79] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003847336s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-471770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-471770 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-471770 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-471770 -n no-preload-471770
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-471770 -n no-preload-471770: exit status 2 (363.993515ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-471770 -n no-preload-471770
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-471770 -n no-preload-471770: exit status 2 (374.219896ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-471770 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-471770 -n no-preload-471770
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-471770 -n no-preload-471770
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestNetworkPlugins/group/auto/Start (96.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1221 19:09:10.255434    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m36.237039497s)
--- PASS: TestNetworkPlugins/group/auto/Start (96.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-729173 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (4.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-729173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729173 -n newest-cni-729173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729173 -n newest-cni-729173: exit status 2 (430.968988ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729173 -n newest-cni-729173
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729173 -n newest-cni-729173: exit status 2 (411.112732ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-729173 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729173 -n newest-cni-729173
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729173 -n newest-cni-729173
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.03s)
E1221 19:17:34.562151    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:17:35.487116    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 19:17:36.159137    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.164297    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.174513    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.195130    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.235361    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.315609    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.476079    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:36.797128    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:37.437628    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:38.718048    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:41.279065    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:46.399551    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:56.640678    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:17:57.944761    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:57.950019    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:57.960268    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:57.980504    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:58.020831    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:58.101109    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:58.261539    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:58.581790    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:17:59.222714    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:18:00.503519    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:18:02.247143    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:18:03.064392    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:18:08.184522    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:18:17.121611    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/custom-flannel-129117/client.crt: no such file or directory
E1221 19:18:18.424718    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
E1221 19:18:19.812447    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:18:25.365901    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory

TestNetworkPlugins/group/flannel/Start (67.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1221 19:10:30.355822    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m7.683113857s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.68s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-94mqt" [bfc090fa-0ed2-4df8-b657-fdda8dba09ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003794743s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ntgqf" [bfb55c2c-6352-4a04-9b93-60571ead5620] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ntgqf" [bfb55c2c-6352-4a04-9b93-60571ead5620] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004951052s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vx6d8" [b0154534-e557-4f87-bcfa-74881774afd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 19:10:47.310418    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vx6d8" [b0154534-e557-4f87-bcfa-74881774afd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006884872s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/calico/Start (96.29s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m36.293511704s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.29s)

TestNetworkPlugins/group/custom-flannel/Start (73.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1221 19:11:26.413272    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:11:46.194257    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
E1221 19:11:54.096497    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
E1221 19:12:03.469600    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
E1221 19:12:31.386260    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/old-k8s-version-946821/client.crt: no such file or directory
E1221 19:12:34.562470    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.567935    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.578085    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.598246    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.639065    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.719300    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:34.879528    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m13.361662468s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-129117 "pgrep -a kubelet"
E1221 19:12:35.202167    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)
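
KubeletFlags runs pgrep -a kubelet on the node; the -a flag prints each matching PID together with its full command line, which lets the test assert that kubelet was started with flags matching the CNI under test. Manual equivalent, assuming minikube on PATH:

    minikube ssh -p custom-flannel-129117 "pgrep -a kubelet"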

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-129117 replace --force -f testdata/netcat-deployment.yaml
E1221 19:12:35.842375    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-np6zd" [95345179-0195-4f2c-9e6c-353a16d3844c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 19:12:37.122580    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
E1221 19:12:39.683551    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-np6zd" [95345179-0195-4f2c-9e6c-353a16d3844c] Running
E1221 19:12:44.804205    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004949751s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.51s)
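
NetCatPod applies testdata/netcat-deployment.yaml (not reproduced in this report) and waits up to 15m for pods labeled app=netcat to become Ready; from the pod events above, the Deployment runs a dnsutils container. A hypothetical hand-rolled equivalent, with the image left as a placeholder since the manifest is not shown here, and port 8080 inferred from the nc checks below:

    kubectl --context custom-flannel-129117 create deployment netcat --image=<dnsutils-image> --port=8080
    kubectl --context custom-flannel-129117 wait --for=condition=ready pod -l app=netcat --timeout=15m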

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
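
The DNS check resolves kubernetes.default from inside the netcat pod, exercising pod-to-cluster-DNS traffic over the CNI; on a healthy cluster this typically returns the kubernetes Service ClusterIP (10.96.0.1 on minikube's default service CIDR). Manual equivalent:

    kubectl --context custom-flannel-129117 exec deployment/netcat -- nslookup kubernetes.default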

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
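
In the Localhost check, nc -z opens the connection without sending a payload, -w 5 caps the wait at five seconds, and -i 5 sets the interval between probes; the command exits 0 only if the TCP connect to localhost:8080 succeeds, confirming the pod can reach its own container port over loopback. The same pattern works for any host/port probe:

    nc -w 5 -z <host> <port> && echo reachable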

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
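
HairPin is the same probe aimed at the pod's own Service name (netcat:8080): the packet leaves the pod, hits the Service VIP, and is DNAT'ed back to the very pod that sent it. That round trip only works when hairpin NAT is in effect on the node (kubelet's hairpin-mode / bridge hairpin settings), which is what this test verifies per CNI:

    kubectl --context custom-flannel-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"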

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v5pht" [f9782a14-c4ff-456a-8b6f-5194f9235689] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005735469s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
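
ControllerPod gates the calico connectivity tests on the CNI's own node agent: it waits up to 10m for a Running pod matching k8s-app=calico-node (Calico's per-node DaemonSet) in kube-system. Manual check:

    kubectl --context calico-129117 -n kube-system get pods -l k8s-app=calico-node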

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bjj8t" [2146bbec-81d6-4293-828c-bf0c128c2e54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bjj8t" [2146bbec-81d6-4293-828c-bf0c128c2e54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004232526s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (92.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E1221 19:13:15.525277    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m32.505950763s)
--- PASS: TestNetworkPlugins/group/false/Start (92.51s)
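
--cni=false starts the profile with no CNI configured at all; with the Docker container runtime, pods appear to fall back to the runtime's default bridge networking, which is why the connectivity subtests below can still pass. One way to inspect what CNI configuration (if any) the node ended up with:

    minikube ssh -p false-129117 "ls /etc/cni/net.d"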

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1221 19:13:56.485553    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/no-preload-471770/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m4.850412041s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (10.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sxljk" [8507aed2-82a5-4b0d-8c06-67094e4d4f02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 19:14:49.240696    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/skaffold-581709/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-sxljk" [8507aed2-82a5-4b0d-8c06-67094e4d4f02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003383763s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lcv2x" [4cfc8964-7b47-4d60-8a44-9f44f44bac06] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004668751s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4hbt9" [7b242b52-e3e1-4850-80db-854e3c07f1cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4hbt9" [7b242b52-e3e1-4850-80db-854e3c07f1cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004979489s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (94.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m34.159180965s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (94.16s)
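
kubenet is selected with --network-plugin=kubenet rather than --cni: it is kubelet's legacy built-in plugin (a simple bridge plus host-local IPAM) that predates CNI-only networking and has been removed in newer Kubernetes releases, so this test exercises the older configuration path:

    minikube start -p kubenet-129117 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker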

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (54.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1221 19:15:35.968190    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:35.973443    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:35.983989    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:36.004308    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:36.044560    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:36.125681    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:36.286478    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:36.606993    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:37.247448    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:38.527632    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:41.088408    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:41.524079    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.529324    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.539566    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.559809    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.600047    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.680289    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:41.840606    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:42.161072    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:42.801738    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:44.081919    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:46.209059    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:15:46.642261    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:47.311044    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/ingress-addon-legacy-310121/client.crt: no such file or directory
E1221 19:15:51.762445    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:15:56.449603    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:16:02.003288    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:16:12.430901    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/addons-203484/client.crt: no such file or directory
E1221 19:16:16.930727    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
E1221 19:16:22.483898    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:16:26.413411    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/default-k8s-diff-port-699377/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (54.249844581s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.25s)
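
--enable-default-cni=true is the older spelling of the basic bridge setup; minikube documents it as deprecated in favor of --cni=bridge, so a roughly equivalent modern invocation would be (sketch):

    minikube start -p enable-default-cni-129117 --memory=3072 --cni=bridge --driver=docker --container-runtime=docker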

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qhhrl" [6a9136c8-55d9-4431-96b0-3de2362c85f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qhhrl" [6a9136c8-55d9-4431-96b0-3de2362c85f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004018488s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9wfdg" [02c5bab9-66f5-45e2-ac82-b34f67bc63f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 19:16:57.891860    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/flannel-129117/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9wfdg" [02c5bab9-66f5-45e2-ac82-b34f67bc63f0] Running
E1221 19:17:03.445011    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/auto-129117/client.crt: no such file or directory
E1221 19:17:03.469209    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/functional-881514/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003580485s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (92.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-129117 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m32.543586659s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-129117 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-129117 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7q8rr" [b31799af-7e99-454d-9e6b-7eb7d338a806] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 19:18:38.905002    7660 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-2360/.minikube/profiles/calico-129117/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-7q8rr" [b31799af-7e99-454d-9e6b-7eb7d338a806] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003327747s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-129117 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-129117 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (27/331)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.62s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-048821 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-048821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-048821
--- SKIP: TestDownloadOnlyKic (0.62s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-922305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-922305
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
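
Although the cilium test itself is skipped, the harness still dumps its standard debugLogs for the cilium-129117 profile on the way out; since the profile was never created, every kubectl probe below fails with a missing-context error and every host probe with a missing-profile error, which is expected here.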
panic.go:523: 
----------------------- debugLogs start: cilium-129117 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-129117

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-129117" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: iptables table nat:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-129117

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-129117

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-129117" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-129117" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-129117

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-129117

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-129117" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-129117" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-129117" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-129117" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-129117" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: kubelet daemon config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> k8s: kubelet logs:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-129117

>>> host: docker daemon status:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: docker daemon config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: docker system info:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: cri-docker daemon status:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: cri-docker daemon config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: cri-dockerd version:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: containerd daemon status:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: containerd daemon config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: containerd config dump:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: crio daemon status:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: crio daemon config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: /etc/crio:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

>>> host: crio config:
* Profile "cilium-129117" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129117"

----------------------- debugLogs end: cilium-129117 [took: 4.46591823s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-129117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-129117
--- SKIP: TestNetworkPlugins/group/cilium (4.66s)