Test Report: Docker_Linux_docker_arm64 17740

6db73b2c9af5fe00de7b62f5c00df582e8611f1d:2023-12-06:32175

Tests failed (3/330)

| Order | Failed Test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                          | 38.92        |
| 174   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 56.83        |
| 249   | TestMissingContainerUpgrade                          | 487.73       |
TestAddons/parallel/Ingress (38.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-440984 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-440984 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-440984 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4487741a-840e-4cbf-bb20-7f880ca9e7fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4487741a-840e-4cbf-bb20-7f880ca9e7fc] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.011719787s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-440984 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.057948753s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable ingress-dns --alsologtostderr -v=1: (1.298250126s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable ingress --alsologtostderr -v=1: (7.901330351s)
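
The failing step above is the ingress-dns check: nslookup against the node IP (192.168.49.2) timed out, meaning nothing answered DNS queries at that address. A minimal sketch of reproducing that check by hand against a running profile, using only commands already shown in this log (the hello-john.test hostname comes from testdata/ingress-dns-example-v1.yaml):

	# re-apply the ingress-dns example manifest the test uses
	kubectl --context addons-440984 replace --force -f testdata/ingress-dns-example-v1.yaml
	# query the DNS server the ingress-dns addon exposes on the node IP
	IP=$(out/minikube-linux-arm64 -p addons-440984 ip)
	nslookup hello-john.test "$IP"

A healthy ingress-dns addon answers with an A record for hello-john.test; the ";; connection timed out" output recorded above means no DNS server responded at that address.
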
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-440984
helpers_test.go:235: (dbg) docker inspect addons-440984:

-- stdout --
	[
	    {
	        "Id": "896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d",
	        "Created": "2023-12-06T18:59:40.970867084Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:59:41.325168819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d/hostname",
	        "HostsPath": "/var/lib/docker/containers/896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d/hosts",
	        "LogPath": "/var/lib/docker/containers/896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d/896f804ac9c1bd9266da9aea1c546682301d6d4800e647a56d528c38557be89d-json.log",
	        "Name": "/addons-440984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-440984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-440984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/77b32e26410023820a8cfb8ab3767665e662466f7b9bdd3022242f3bebc5327a-init/diff:/var/lib/docker/overlay2/3961c608fd2e546f17711d7abfbc6ea02272979b18f6f84671d9084e2cf5bd05/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77b32e26410023820a8cfb8ab3767665e662466f7b9bdd3022242f3bebc5327a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77b32e26410023820a8cfb8ab3767665e662466f7b9bdd3022242f3bebc5327a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77b32e26410023820a8cfb8ab3767665e662466f7b9bdd3022242f3bebc5327a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-440984",
	                "Source": "/var/lib/docker/volumes/addons-440984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-440984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-440984",
	                "name.minikube.sigs.k8s.io": "addons-440984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bd418585fe91c06ae16f4e4fd2ac98ec9eb4350e4fa0163853caf822a2142fe",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bd418585fe9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-440984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "896f804ac9c1",
	                        "addons-440984"
	                    ],
	                    "NetworkID": "04058a0dfc0ab2ab17393de7a426478433a728e8d3f13cc46ccb4c520084e7e3",
	                    "EndpointID": "42b7cc5d8bcab5f5ea7b0a955cac367387ff75cc7e32c48d3936c3ef28747052",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-440984 -n addons-440984
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 logs -n 25: (2.033816456s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-942237   | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |                     |
	|         | -p download-only-942237              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-942237   | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |                     |
	|         | -p download-only-942237              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-942237   | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |                     |
	|         | -p download-only-942237              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1    |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 18:59 UTC |
	| delete  | -p download-only-942237              | download-only-942237   | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 18:59 UTC |
	| delete  | -p download-only-942237              | download-only-942237   | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 18:59 UTC |
	| start   | --download-only -p                   | download-docker-553370 | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |                     |
	|         | download-docker-553370               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-553370            | download-docker-553370 | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 18:59 UTC |
	| start   | --download-only -p                   | binary-mirror-136381   | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |                     |
	|         | binary-mirror-136381                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38321               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-136381              | binary-mirror-136381   | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 18:59 UTC |
	| addons  | disable dashboard -p                 | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |                     |
	|         | addons-440984                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |                     |
	|         | addons-440984                        |                        |         |         |                     |                     |
	| start   | -p addons-440984 --wait=true         | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC | 06 Dec 23 19:01 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-440984 ip                     | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| addons  | addons-440984 addons disable         | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-440984 addons                 | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | addons-440984                        |                        |         |         |                     |                     |
	| ssh     | addons-440984 ssh curl -s            | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-440984 ip                     | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	| addons  | addons-440984 addons                 | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-440984 addons                 | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-440984 addons disable         | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-440984 addons disable         | addons-440984          | jenkins | v1.32.0 | 06 Dec 23 19:02 UTC | 06 Dec 23 19:02 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:59:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:59:17.216145  245404 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:59:17.216383  245404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:59:17.216396  245404 out.go:309] Setting ErrFile to fd 2...
	I1206 18:59:17.216403  245404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:59:17.216706  245404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 18:59:17.217194  245404 out.go:303] Setting JSON to false
	I1206 18:59:17.218398  245404 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6104,"bootTime":1701883054,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 18:59:17.218476  245404 start.go:138] virtualization:  
	I1206 18:59:17.221379  245404 out.go:177] * [addons-440984] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 18:59:17.223451  245404 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 18:59:17.225756  245404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:59:17.223543  245404 notify.go:220] Checking for updates...
	I1206 18:59:17.228416  245404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 18:59:17.230884  245404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 18:59:17.233325  245404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 18:59:17.235432  245404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:59:17.237878  245404 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:59:17.262021  245404 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:59:17.262140  245404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:59:17.346541  245404 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:59:17.336725125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:59:17.346646  245404 docker.go:295] overlay module found
	I1206 18:59:17.349583  245404 out.go:177] * Using the docker driver based on user configuration
	I1206 18:59:17.351944  245404 start.go:298] selected driver: docker
	I1206 18:59:17.351967  245404 start.go:902] validating driver "docker" against <nil>
	I1206 18:59:17.351981  245404 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:59:17.352725  245404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:59:17.422276  245404 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:59:17.412106854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:59:17.422438  245404 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:59:17.422661  245404 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:59:17.424823  245404 out.go:177] * Using Docker driver with root privileges
	I1206 18:59:17.427311  245404 cni.go:84] Creating CNI manager for ""
	I1206 18:59:17.427341  245404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 18:59:17.427353  245404 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 18:59:17.427368  245404 start_flags.go:323] config:
	{Name:addons-440984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-440984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:59:17.429646  245404 out.go:177] * Starting control plane node addons-440984 in cluster addons-440984
	I1206 18:59:17.431885  245404 cache.go:121] Beginning downloading kic base image for docker with docker
	I1206 18:59:17.433997  245404 out.go:177] * Pulling base image ...
	I1206 18:59:17.436545  245404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1206 18:59:17.436595  245404 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1206 18:59:17.436604  245404 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1206 18:59:17.436674  245404 cache.go:56] Caching tarball of preloaded images
	I1206 18:59:17.436759  245404 preload.go:174] Found /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1206 18:59:17.436770  245404 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1206 18:59:17.437130  245404 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/config.json ...
	I1206 18:59:17.437151  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/config.json: {Name:mk2343f7abd5191bb0b66897c4900cc0e0df4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:17.457217  245404 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1206 18:59:17.457364  245404 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1206 18:59:17.457390  245404 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1206 18:59:17.457396  245404 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1206 18:59:17.457407  245404 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1206 18:59:17.457416  245404 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from local cache
	I1206 18:59:33.673219  245404 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from cached tarball
	I1206 18:59:33.673268  245404 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:59:33.673323  245404 start.go:365] acquiring machines lock for addons-440984: {Name:mk09babeea1c0bc1a196ef32b910c28740dd8891 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:59:33.673453  245404 start.go:369] acquired machines lock for "addons-440984" in 107.339µs
	I1206 18:59:33.673484  245404 start.go:93] Provisioning new machine with config: &{Name:addons-440984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-440984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 18:59:33.673561  245404 start.go:125] createHost starting for "" (driver="docker")
	I1206 18:59:33.676224  245404 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1206 18:59:33.676499  245404 start.go:159] libmachine.API.Create for "addons-440984" (driver="docker")
	I1206 18:59:33.676537  245404 client.go:168] LocalClient.Create starting
	I1206 18:59:33.676641  245404 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem
	I1206 18:59:34.620973  245404 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem
	I1206 18:59:34.793971  245404 cli_runner.go:164] Run: docker network inspect addons-440984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 18:59:34.811494  245404 cli_runner.go:211] docker network inspect addons-440984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 18:59:34.811576  245404 network_create.go:281] running [docker network inspect addons-440984] to gather additional debugging logs...
	I1206 18:59:34.811597  245404 cli_runner.go:164] Run: docker network inspect addons-440984
	W1206 18:59:34.829745  245404 cli_runner.go:211] docker network inspect addons-440984 returned with exit code 1
	I1206 18:59:34.829777  245404 network_create.go:284] error running [docker network inspect addons-440984]: docker network inspect addons-440984: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-440984 not found
	I1206 18:59:34.829790  245404 network_create.go:286] output of [docker network inspect addons-440984]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-440984 not found
	
	** /stderr **
	I1206 18:59:34.829887  245404 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:59:34.847809  245404 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002bd1340}
	I1206 18:59:34.847851  245404 network_create.go:124] attempt to create docker network addons-440984 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 18:59:34.847924  245404 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-440984 addons-440984
	I1206 18:59:34.924019  245404 network_create.go:108] docker network addons-440984 192.168.49.0/24 created
	I1206 18:59:34.924052  245404 kic.go:121] calculated static IP "192.168.49.2" for the "addons-440984" container
	I1206 18:59:34.924125  245404 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:59:34.941469  245404 cli_runner.go:164] Run: docker volume create addons-440984 --label name.minikube.sigs.k8s.io=addons-440984 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:59:34.959706  245404 oci.go:103] Successfully created a docker volume addons-440984
	I1206 18:59:34.959798  245404 cli_runner.go:164] Run: docker run --rm --name addons-440984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440984 --entrypoint /usr/bin/test -v addons-440984:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1206 18:59:36.867761  245404 cli_runner.go:217] Completed: docker run --rm --name addons-440984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440984 --entrypoint /usr/bin/test -v addons-440984:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (1.907922991s)
	I1206 18:59:36.867790  245404 oci.go:107] Successfully prepared a docker volume addons-440984
	I1206 18:59:36.867828  245404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1206 18:59:36.867851  245404 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 18:59:36.867936  245404 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-440984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 18:59:40.889902  245404 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-440984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.021924522s)
	I1206 18:59:40.889932  245404 kic.go:203] duration metric: took 4.022077 seconds to extract preloaded images to volume
	W1206 18:59:40.890090  245404 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:59:40.890200  245404 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:59:40.954954  245404 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-440984 --name addons-440984 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440984 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-440984 --network addons-440984 --ip 192.168.49.2 --volume addons-440984:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1206 18:59:41.335465  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Running}}
	I1206 18:59:41.360797  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 18:59:41.384781  245404 cli_runner.go:164] Run: docker exec addons-440984 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:59:41.468330  245404 oci.go:144] the created container "addons-440984" has a running status.
	I1206 18:59:41.468359  245404 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa...
	I1206 18:59:42.286789  245404 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:59:42.313255  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 18:59:42.366114  245404 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:59:42.366138  245404 kic_runner.go:114] Args: [docker exec --privileged addons-440984 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:59:42.465682  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 18:59:42.495134  245404 machine.go:88] provisioning docker machine ...
	I1206 18:59:42.495347  245404 ubuntu.go:169] provisioning hostname "addons-440984"
	I1206 18:59:42.495427  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:42.521705  245404 main.go:141] libmachine: Using SSH client type: native
	I1206 18:59:42.522134  245404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1206 18:59:42.522148  245404 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-440984 && echo "addons-440984" | sudo tee /etc/hostname
	I1206 18:59:42.701572  245404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-440984
	
	I1206 18:59:42.701747  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:42.725559  245404 main.go:141] libmachine: Using SSH client type: native
	I1206 18:59:42.725966  245404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1206 18:59:42.725993  245404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-440984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-440984/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-440984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:59:42.885624  245404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:59:42.885652  245404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17740-239434/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-239434/.minikube}
	I1206 18:59:42.885671  245404 ubuntu.go:177] setting up certificates
	I1206 18:59:42.885682  245404 provision.go:83] configureAuth start
	I1206 18:59:42.885743  245404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440984
	I1206 18:59:42.903211  245404 provision.go:138] copyHostCerts
	I1206 18:59:42.903305  245404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/cert.pem (1123 bytes)
	I1206 18:59:42.903440  245404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/key.pem (1679 bytes)
	I1206 18:59:42.903517  245404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/ca.pem (1078 bytes)
	I1206 18:59:42.903576  245404 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem org=jenkins.addons-440984 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-440984]
	I1206 18:59:43.341651  245404 provision.go:172] copyRemoteCerts
	I1206 18:59:43.341741  245404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:59:43.341787  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:43.359898  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 18:59:43.468509  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:59:43.498855  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1206 18:59:43.527991  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
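copyHostCerts/copyRemoteCerts stage the ca/cert/key material that the `--tlsverify` flags in the dockerd command further down consume. Once provisioning finishes, a client can exercise the same material; the flags are standard docker CLI TLS options, while the paths and endpoint here are assumptions pieced together from the cp lines above:

    # Verify the TLS-provisioned daemon from the host (paths/endpoint assumed).
    docker --tlsverify \
      --tlscacert "$HOME/.minikube/certs/ca.pem" \
      --tlscert   "$HOME/.minikube/certs/cert.pem" \
      --tlskey    "$HOME/.minikube/certs/key.pem" \
      -H tcp://192.168.49.2:2376 version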
	I1206 18:59:43.556661  245404 provision.go:86] duration metric: configureAuth took 670.963547ms
	I1206 18:59:43.556693  245404 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:59:43.556896  245404 config.go:182] Loaded profile config "addons-440984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 18:59:43.556957  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:43.575013  245404 main.go:141] libmachine: Using SSH client type: native
	I1206 18:59:43.575466  245404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1206 18:59:43.575485  245404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1206 18:59:43.726375  245404 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1206 18:59:43.726407  245404 ubuntu.go:71] root file system type: overlay
	I1206 18:59:43.726516  245404 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1206 18:59:43.726592  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:43.745366  245404 main.go:141] libmachine: Using SSH client type: native
	I1206 18:59:43.745783  245404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1206 18:59:43.745862  245404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1206 18:59:43.915736  245404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1206 18:59:43.915827  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:43.938586  245404 main.go:141] libmachine: Using SSH client type: native
	I1206 18:59:43.939008  245404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1206 18:59:43.939030  245404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
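This one-liner is an idempotency guard: `diff -u` exits non-zero whenever the freshly rendered unit differs from the installed one (its output, captured below, doubles as a change log), and only then is the file swapped in and docker restarted. Unrolled, the same logic reads:

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      # The rendered unit differs (or the old one is missing): install and restart.
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi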
	I1206 18:59:44.792203  245404 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-06 18:59:43.910691714 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1206 18:59:44.792238  245404 machine.go:91] provisioned docker machine in 2.296915069s
	I1206 18:59:44.792250  245404 client.go:171] LocalClient.Create took 11.115706577s
	I1206 18:59:44.792266  245404 start.go:167] duration metric: libmachine.API.Create for "addons-440984" took 11.115765006s
	I1206 18:59:44.792354  245404 start.go:300] post-start starting for "addons-440984" (driver="docker")
	I1206 18:59:44.792364  245404 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:59:44.792438  245404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:59:44.792482  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:44.812165  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 18:59:44.919540  245404 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:59:44.923703  245404 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:59:44.923742  245404 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:59:44.923758  245404 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:59:44.923771  245404 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:59:44.923782  245404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-239434/.minikube/addons for local assets ...
	I1206 18:59:44.923847  245404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-239434/.minikube/files for local assets ...
	I1206 18:59:44.923874  245404 start.go:303] post-start completed in 131.513167ms
	I1206 18:59:44.924193  245404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440984
	I1206 18:59:44.943395  245404 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/config.json ...
	I1206 18:59:44.943694  245404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:59:44.943748  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:44.968867  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 18:59:45.083905  245404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:59:45.091792  245404 start.go:128] duration metric: createHost completed in 11.418212288s
	I1206 18:59:45.091822  245404 start.go:83] releasing machines lock for "addons-440984", held for 11.418354915s
	I1206 18:59:45.091935  245404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440984
	I1206 18:59:45.141283  245404 ssh_runner.go:195] Run: cat /version.json
	I1206 18:59:45.141363  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:45.155467  245404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:59:45.155566  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 18:59:45.186886  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 18:59:45.200891  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 18:59:45.340707  245404 ssh_runner.go:195] Run: systemctl --version
	I1206 18:59:45.483350  245404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:59:45.489264  245404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1206 18:59:45.520358  245404 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:59:45.520441  245404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:59:45.555845  245404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1206 18:59:45.555876  245404 start.go:475] detecting cgroup driver to use...
	I1206 18:59:45.555909  245404 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:59:45.556023  245404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:59:45.576714  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1206 18:59:45.591808  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 18:59:45.606249  245404 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 18:59:45.606443  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 18:59:45.619446  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 18:59:45.631591  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 18:59:45.643608  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 18:59:45.655838  245404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:59:45.667287  245404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
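The sed chain above edits /etc/containerd/config.toml in place so containerd matches the detected "cgroupfs" driver: the pause image is pinned, runc is moved to the v2 shim, SystemdCgroup is forced off, and the CNI conf_dir is pointed at /etc/cni/net.d. The stanza those expressions target looks like this in a stock containerd 1.x CRI config (excerpt reconstructed from the sed patterns, not the actual file):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          # false keeps the cgroupfs driver; true would delegate cgroups to systemd.
          SystemdCgroup = false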
	I1206 18:59:45.680178  245404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:59:45.691178  245404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:59:45.701808  245404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:59:45.790284  245404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 18:59:45.908652  245404 start.go:475] detecting cgroup driver to use...
	I1206 18:59:45.908698  245404 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:59:45.908749  245404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1206 18:59:45.928941  245404 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1206 18:59:45.929021  245404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 18:59:45.946918  245404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:59:45.968244  245404 ssh_runner.go:195] Run: which cri-dockerd
	I1206 18:59:45.973352  245404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1206 18:59:45.986408  245404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1206 18:59:46.015737  245404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1206 18:59:46.127301  245404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1206 18:59:46.237411  245404 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1206 18:59:46.237599  245404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
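docker.go only logs the size of the daemon.json it pushes (130 bytes), so the exact contents are not in this log; a representative file that sets the same "cgroupfs" driver would be:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }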
	I1206 18:59:46.263079  245404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:59:46.379915  245404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1206 18:59:46.659418  245404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1206 18:59:46.763312  245404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1206 18:59:46.865167  245404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1206 18:59:46.964796  245404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:59:47.066499  245404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1206 18:59:47.084004  245404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:59:47.189214  245404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1206 18:59:47.274782  245404 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1206 18:59:47.274923  245404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1206 18:59:47.280001  245404 start.go:543] Will wait 60s for crictl version
	I1206 18:59:47.280073  245404 ssh_runner.go:195] Run: which crictl
	I1206 18:59:47.285255  245404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:59:47.344381  245404 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1206 18:59:47.344492  245404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 18:59:47.371817  245404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 18:59:47.402403  245404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1206 18:59:47.402578  245404 cli_runner.go:164] Run: docker network inspect addons-440984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:59:47.420440  245404 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 18:59:47.425372  245404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:59:47.439379  245404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1206 18:59:47.439450  245404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 18:59:47.461128  245404 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 18:59:47.461152  245404 docker.go:601] Images already preloaded, skipping extraction
	I1206 18:59:47.461217  245404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 18:59:47.483407  245404 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 18:59:47.483451  245404 cache_images.go:84] Images are preloaded, skipping loading
	I1206 18:59:47.483522  245404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1206 18:59:47.545593  245404 cni.go:84] Creating CNI manager for ""
	I1206 18:59:47.545621  245404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 18:59:47.545652  245404 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:59:47.545678  245404 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-440984 NodeName:addons-440984 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:59:47.545828  245404 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-440984"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
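	The documents above are ordinary kubeadm API objects (InitConfiguration and ClusterConfiguration, plus KubeletConfiguration and KubeProxyConfiguration component configs); to see which fields minikube actually overrides, they can be diffed against upstream defaults:

    # Print kubeadm's defaults for the same API objects to diff against the file above.
    kubeadm config print init-defaults \
      --component-configs KubeletConfiguration,KubeProxyConfiguration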
	
	I1206 18:59:47.545952  245404 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-440984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-440984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 18:59:47.546025  245404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 18:59:47.557028  245404 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:59:47.557111  245404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:59:47.571177  245404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 18:59:47.593269  245404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 18:59:47.614588  245404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1206 18:59:47.636496  245404 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 18:59:47.640989  245404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
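Both /etc/hosts edits in this section (host.minikube.internal earlier, control-plane.minikube.internal here) use the same rewrite idiom: filter out any stale entry, append the fresh one into a temp file keyed by the shell PID, then `sudo cp` the result back. `cp` rather than `mv` is deliberate inside a container, where /etc/hosts is typically a bind mount and cannot be replaced by rename. Spelled out:

    # Rebuild /etc/hosts with exactly one control-plane.minikube.internal entry.
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.49.2\tcontrol-plane.minikube.internal\n'
    } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts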
	I1206 18:59:47.654397  245404 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984 for IP: 192.168.49.2
	I1206 18:59:47.654483  245404 certs.go:190] acquiring lock for shared ca certs: {Name:mk1262bee946068d8c620546d5b1b1b1aa594d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:47.654654  245404 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key
	I1206 18:59:47.964829  245404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt ...
	I1206 18:59:47.964860  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt: {Name:mk5d40d865fc9a9ee9e15c41c79d8993cb994314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:47.965482  245404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key ...
	I1206 18:59:47.965497  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key: {Name:mk49d3813a8d60ba17bdf1c9ec810c65a7908e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:47.965588  245404 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key
	I1206 18:59:48.326052  245404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.crt ...
	I1206 18:59:48.326084  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.crt: {Name:mk98eea9f41892c0d72380dd17fc581a544c3c23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:48.326947  245404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key ...
	I1206 18:59:48.326967  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key: {Name:mkc94e2b898dd37386544d96bd885fc3dcf880c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:48.327615  245404 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.key
	I1206 18:59:48.327634  245404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt with IP's: []
	I1206 18:59:48.897687  245404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt ...
	I1206 18:59:48.897723  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: {Name:mk7d47a3543586512afc1ccee554e05aeb41edc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:48.897912  245404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.key ...
	I1206 18:59:48.897928  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.key: {Name:mk1ec970ccf56f4ccd61f50f5a7f15ca53232a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:48.898012  245404 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key.dd3b5fb2
	I1206 18:59:48.898032  245404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:59:49.310168  245404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt.dd3b5fb2 ...
	I1206 18:59:49.310202  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt.dd3b5fb2: {Name:mkd1e835d161d85f5e4fb847db93b7292260b075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:49.311135  245404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key.dd3b5fb2 ...
	I1206 18:59:49.311158  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key.dd3b5fb2: {Name:mk273ece2ccc58f02234594b3b1e47d29415b06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:49.311758  245404 certs.go:337] copying /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt
	I1206 18:59:49.311850  245404 certs.go:341] copying /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key
	I1206 18:59:49.311903  245404 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.key
	I1206 18:59:49.311925  245404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.crt with IP's: []
	I1206 18:59:49.819222  245404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.crt ...
	I1206 18:59:49.819254  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.crt: {Name:mka8686d17973182b23441a5b722cdc30aa28be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:59:49.819833  245404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.key ...
	I1206 18:59:49.819852  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.key: {Name:mk8cb762b328234e96d802cc56d07539e8ca9b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
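The certs.go/crypto.go sequence above amounts to a small two-tier PKI: self-signed CAs (minikubeCA, proxyClientCA), then leaf certificates signed by them with the IP SANs listed in the log. An equivalent sketch with plain openssl (file names hypothetical, SANs copied from the apiserver cert line above):

    # Self-signed CA.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout ca.key -out ca.crt -subj "/CN=minikubeCA"
    # Leaf key + CSR, then sign with the CA, adding the same IP SANs.
    openssl req -newkey rsa:2048 -nodes -keyout apiserver.key \
      -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1")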
	I1206 18:59:49.820053  245404 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 18:59:49.820097  245404 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem (1078 bytes)
	I1206 18:59:49.820130  245404 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:59:49.820165  245404 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem (1679 bytes)
	I1206 18:59:49.820866  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:59:49.849763  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 18:59:49.880142  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:59:49.909633  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 18:59:49.938482  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:59:49.968939  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 18:59:49.999128  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:59:50.040500  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 18:59:50.071227  245404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:59:50.102859  245404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:59:50.127691  245404 ssh_runner.go:195] Run: openssl version
	I1206 18:59:50.137846  245404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:59:50.150417  245404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:59:50.155539  245404 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:59:50.155640  245404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:59:50.165064  245404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 18:59:50.178224  245404 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:59:50.183515  245404 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:59:50.183624  245404 kubeadm.go:404] StartCluster: {Name:addons-440984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-440984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:59:50.183764  245404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1206 18:59:50.206136  245404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:59:50.217494  245404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:59:50.228669  245404 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1206 18:59:50.228751  245404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:59:50.239934  245404 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:59:50.239980  245404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 18:59:50.302848  245404 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 18:59:50.302971  245404 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:59:50.364367  245404 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:59:50.364440  245404 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1206 18:59:50.364477  245404 kubeadm.go:322] OS: Linux
	I1206 18:59:50.364534  245404 kubeadm.go:322] CGROUPS_CPU: enabled
	I1206 18:59:50.364583  245404 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1206 18:59:50.364633  245404 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1206 18:59:50.364682  245404 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1206 18:59:50.364730  245404 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1206 18:59:50.364781  245404 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1206 18:59:50.364827  245404 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1206 18:59:50.364876  245404 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1206 18:59:50.364923  245404 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1206 18:59:50.445399  245404 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:59:50.445507  245404 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:59:50.445600  245404 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:59:50.801241  245404 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:59:50.806386  245404 out.go:204]   - Generating certificates and keys ...
	I1206 18:59:50.806504  245404 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:59:50.806611  245404 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:59:52.385717  245404 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:59:53.168062  245404 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:59:54.088897  245404 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:59:54.534974  245404 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:59:54.780404  245404 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:59:54.780750  245404 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-440984 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:59:54.996466  245404 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:59:54.996849  245404 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-440984 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:59:55.620016  245404 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:59:56.125789  245404 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:59:56.484890  245404 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:59:56.485270  245404 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:59:56.902162  245404 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:59:57.761397  245404 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:59:58.347987  245404 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:59:59.309277  245404 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:59:59.310085  245404 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:59:59.312943  245404 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:59:59.316230  245404 out.go:204]   - Booting up control plane ...
	I1206 18:59:59.316420  245404 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:59:59.316510  245404 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:59:59.317286  245404 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:59:59.336334  245404 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:59:59.336427  245404 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:59:59.336465  245404 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:59:59.438770  245404 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 19:00:08.440739  245404 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002491 seconds
	I1206 19:00:08.440856  245404 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 19:00:08.457047  245404 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 19:00:08.996670  245404 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 19:00:08.996858  245404 kubeadm.go:322] [mark-control-plane] Marking the node addons-440984 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 19:00:09.508425  245404 kubeadm.go:322] [bootstrap-token] Using token: 6z0dnl.a1fs7f6qnuhx6sg1
	I1206 19:00:09.510558  245404 out.go:204]   - Configuring RBAC rules ...
	I1206 19:00:09.510722  245404 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 19:00:09.521285  245404 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 19:00:09.531603  245404 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 19:00:09.535736  245404 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 19:00:09.541962  245404 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 19:00:09.546045  245404 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 19:00:09.565292  245404 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 19:00:09.802586  245404 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 19:00:09.926053  245404 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 19:00:09.927176  245404 kubeadm.go:322] 
	I1206 19:00:09.927249  245404 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 19:00:09.927259  245404 kubeadm.go:322] 
	I1206 19:00:09.927338  245404 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 19:00:09.927348  245404 kubeadm.go:322] 
	I1206 19:00:09.927372  245404 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 19:00:09.927431  245404 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 19:00:09.927485  245404 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 19:00:09.927494  245404 kubeadm.go:322] 
	I1206 19:00:09.927545  245404 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 19:00:09.927554  245404 kubeadm.go:322] 
	I1206 19:00:09.927599  245404 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 19:00:09.927608  245404 kubeadm.go:322] 
	I1206 19:00:09.927657  245404 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 19:00:09.927732  245404 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 19:00:09.927800  245404 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 19:00:09.927808  245404 kubeadm.go:322] 
	I1206 19:00:09.927887  245404 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 19:00:09.927971  245404 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 19:00:09.927979  245404 kubeadm.go:322] 
	I1206 19:00:09.928058  245404 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6z0dnl.a1fs7f6qnuhx6sg1 \
	I1206 19:00:09.928156  245404 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:372e7bdfa31dcfc44eafd3161d124bebbb6f7a71daed6ab3c52f0521e99d1a38 \
	I1206 19:00:09.928183  245404 kubeadm.go:322] 	--control-plane 
	I1206 19:00:09.928191  245404 kubeadm.go:322] 
	I1206 19:00:09.928302  245404 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 19:00:09.928318  245404 kubeadm.go:322] 
	I1206 19:00:09.928395  245404 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6z0dnl.a1fs7f6qnuhx6sg1 \
	I1206 19:00:09.928498  245404 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:372e7bdfa31dcfc44eafd3161d124bebbb6f7a71daed6ab3c52f0521e99d1a38 
	I1206 19:00:09.934320  245404 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1206 19:00:09.934436  245404 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 19:00:09.934457  245404 cni.go:84] Creating CNI manager for ""
	I1206 19:00:09.934484  245404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 19:00:09.937231  245404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 19:00:09.939669  245404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 19:00:09.958673  245404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
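Only the size of /etc/cni/net.d/1-k8s.conflist (457 bytes) appears in the log; a representative bridge chain matching the 10.244.0.0/16 pod CIDR configured above would look like this (contents are an assumption):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }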
	I1206 19:00:09.993729  245404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:00:09.993889  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:09.993984  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=addons-440984 minikube.k8s.io/updated_at=2023_12_06T19_00_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:10.377408  245404 ops.go:34] apiserver oom_adj: -16
	I1206 19:00:10.377511  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:10.492699  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:11.089045  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:11.589061  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:12.089196  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:12.588510  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:13.089433  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:13.589361  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:14.089220  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:14.588527  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:15.088543  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:15.589198  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:16.089333  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:16.589107  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:17.088502  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:17.589082  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:18.089337  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:18.588890  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:19.089028  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:19.588437  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:20.088844  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:20.588862  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:21.089097  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:21.588992  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:22.088533  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:22.589052  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:23.088929  245404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:00:23.195403  245404 kubeadm.go:1088] duration metric: took 13.20155927s to wait for elevateKubeSystemPrivileges.
	I1206 19:00:23.195436  245404 kubeadm.go:406] StartCluster complete in 33.011816064s
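The run of identical `kubectl get sa default` calls above (19:00:10 through 19:00:23) is minikube polling for the `default` ServiceAccount to appear before granting kube-system elevated RBAC; the roughly 500 ms spacing of the timestamps reflects a fixed retry interval. A minimal shell equivalent of that wait (a sketch only; minikube does this in Go via ssh_runner, not in shell):

	# poll until the default ServiceAccount exists, roughly matching the cadence above
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done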
	I1206 19:00:23.195454  245404 settings.go:142] acquiring lock: {Name:mk0fc622b23c24037d6b3f8b7cae60bf03ba98b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:00:23.195573  245404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 19:00:23.196027  245404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/kubeconfig: {Name:mk2dc9f3d2c10f91cb0e51e097b71483e7cf911f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:00:23.198372  245404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:00:23.198660  245404 config.go:182] Loaded profile config "addons-440984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:00:23.198711  245404 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1206 19:00:23.198794  245404 addons.go:69] Setting volumesnapshots=true in profile "addons-440984"
	I1206 19:00:23.198802  245404 addons.go:69] Setting cloud-spanner=true in profile "addons-440984"
	I1206 19:00:23.198812  245404 addons.go:231] Setting addon volumesnapshots=true in "addons-440984"
	I1206 19:00:23.198821  245404 addons.go:231] Setting addon cloud-spanner=true in "addons-440984"
	I1206 19:00:23.198849  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.198873  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.199326  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.199332  245404 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-440984"
	I1206 19:00:23.199362  245404 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-440984"
	I1206 19:00:23.199406  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.199810  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.199873  245404 addons.go:69] Setting gcp-auth=true in profile "addons-440984"
	I1206 19:00:23.199897  245404 mustload.go:65] Loading cluster: addons-440984
	I1206 19:00:23.200057  245404 config.go:182] Loaded profile config "addons-440984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:00:23.200293  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.200709  245404 addons.go:69] Setting ingress=true in profile "addons-440984"
	I1206 19:00:23.200741  245404 addons.go:231] Setting addon ingress=true in "addons-440984"
	I1206 19:00:23.200791  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.201247  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.208604  245404 addons.go:69] Setting storage-provisioner=true in profile "addons-440984"
	I1206 19:00:23.208642  245404 addons.go:231] Setting addon storage-provisioner=true in "addons-440984"
	I1206 19:00:23.208698  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.209163  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.209622  245404 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-440984"
	I1206 19:00:23.209657  245404 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-440984"
	I1206 19:00:23.209949  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.198794  245404 addons.go:69] Setting default-storageclass=true in profile "addons-440984"
	I1206 19:00:23.216908  245404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-440984"
	I1206 19:00:23.199326  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.230728  245404 addons.go:69] Setting ingress-dns=true in profile "addons-440984"
	I1206 19:00:23.230992  245404 addons.go:231] Setting addon ingress-dns=true in "addons-440984"
	I1206 19:00:23.231117  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.239622  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.230912  245404 addons.go:69] Setting metrics-server=true in profile "addons-440984"
	I1206 19:00:23.263661  245404 addons.go:231] Setting addon metrics-server=true in "addons-440984"
	I1206 19:00:23.263770  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.264337  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.230902  245404 addons.go:69] Setting inspektor-gadget=true in profile "addons-440984"
	I1206 19:00:23.271294  245404 addons.go:231] Setting addon inspektor-gadget=true in "addons-440984"
	I1206 19:00:23.271357  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.271808  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.230917  245404 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-440984"
	I1206 19:00:23.280412  245404 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-440984"
	I1206 19:00:23.280500  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.281006  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.230923  245404 addons.go:69] Setting registry=true in profile "addons-440984"
	I1206 19:00:23.288444  245404 addons.go:231] Setting addon registry=true in "addons-440984"
	I1206 19:00:23.288507  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.289052  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.307743  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.438308  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 19:00:23.408679  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.417245  245404 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-440984"
	I1206 19:00:23.443790  245404 addons.go:231] Setting addon default-storageclass=true in "addons-440984"
	I1206 19:00:23.472340  245404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1206 19:00:23.472399  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.472414  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:23.475796  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.476546  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 19:00:23.481393  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 19:00:23.481423  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 19:00:23.481487  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.476717  245404 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1206 19:00:23.476722  245404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:00:23.476733  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 19:00:23.477228  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:23.493336  245404 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1206 19:00:23.501279  245404 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1206 19:00:23.501289  245404 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1206 19:00:23.501292  245404 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1206 19:00:23.501304  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 19:00:23.506084  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.507293  245404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 19:00:23.514592  245404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 19:00:23.507641  245404 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:00:23.518102  245404 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 19:00:23.518114  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:00:23.521247  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.524508  245404 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1206 19:00:23.524532  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1206 19:00:23.524599  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.531059  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 19:00:23.536426  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 19:00:23.531293  245404 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 19:00:23.531305  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1206 19:00:23.531571  245404 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 19:00:23.538784  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 19:00:23.538863  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.560699  245404 out.go:177]   - Using image docker.io/registry:2.8.3
	I1206 19:00:23.553320  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1206 19:00:23.553388  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.559653  245404 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:00:23.575441  245404 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1206 19:00:23.568297  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 19:00:23.568411  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.568526  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:00:23.584550  245404 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 19:00:23.584852  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1206 19:00:23.584952  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.601391  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 19:00:23.600803  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.601055  245404 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-440984" context rescaled to 1 replicas
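The `kapi.go:248` line above trims the stock two-replica CoreDNS deployment down to one replica for this single-node cluster. The equivalent manual step (a sketch, reusing the context name from the log) would be:

	# single node, so one CoreDNS replica is enough
	kubectl --context addons-440984 -n kube-system scale deployment coredns --replicas=1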
	I1206 19:00:23.633810  245404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 19:00:23.646263  245404 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1206 19:00:23.648699  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.649366  245404 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 19:00:23.656428  245404 out.go:177] * Verifying Kubernetes components...
	I1206 19:00:23.658883  245404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:00:23.650878  245404 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 19:00:23.658980  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 19:00:23.659026  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.650888  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 19:00:23.683486  245404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 19:00:23.692454  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 19:00:23.688494  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.690051  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.696678  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 19:00:23.696759  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.757565  245404 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 19:00:23.772408  245404 out.go:177]   - Using image docker.io/busybox:stable
	I1206 19:00:23.775367  245404 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 19:00:23.775391  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 19:00:23.775459  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:23.774727  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.779064  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.785336  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.787522  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.853155  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.878742  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.880399  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.884161  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:23.889530  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	W1206 19:00:23.890745  245404 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 19:00:23.890767  245404 retry.go:31] will retry after 125.409055ms: ssh: handshake failed: EOF
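Each `new ssh client` above dials 127.0.0.1:33073, the host port that the repeated `docker container inspect -f '...HostPort...'` template extracts for the container's 22/tcp mapping; the single `ssh: handshake failed: EOF` is transient and is retried after 125 ms. The same mapping can be read back with a plain docker command (an equivalent query, not what minikube itself runs):

	# prints the host side of the 22/tcp mapping, e.g. 0.0.0.0:33073
	docker port addons-440984 22/tcp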
	I1206 19:00:24.558334  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:00:24.580663  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:00:24.665709  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 19:00:24.736692  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 19:00:24.736726  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 19:00:24.764234  245404 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1206 19:00:24.764266  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1206 19:00:24.776891  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 19:00:24.809229  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 19:00:24.914772  245404 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 19:00:24.914800  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 19:00:24.942255  245404 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 19:00:24.942283  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 19:00:25.120892  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 19:00:25.155193  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 19:00:25.199575  245404 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 19:00:25.199603  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 19:00:25.220241  245404 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1206 19:00:25.220303  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1206 19:00:25.303545  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 19:00:25.303567  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 19:00:25.388519  245404 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 19:00:25.388544  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 19:00:25.421669  245404 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 19:00:25.421698  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 19:00:25.500947  245404 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 19:00:25.500976  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 19:00:25.643273  245404 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1206 19:00:25.643303  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1206 19:00:25.650483  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 19:00:25.650518  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 19:00:25.746763  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 19:00:25.773448  245404 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 19:00:25.773476  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 19:00:25.811775  245404 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:00:25.811821  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 19:00:25.967276  245404 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1206 19:00:25.967303  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1206 19:00:25.979538  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 19:00:25.979566  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 19:00:26.240361  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 19:00:26.240387  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 19:00:26.341158  245404 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1206 19:00:26.341181  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1206 19:00:26.358543  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 19:00:26.433769  245404 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 19:00:26.433794  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 19:00:26.464910  245404 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 19:00:26.464976  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 19:00:26.472770  245404 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1206 19:00:26.472836  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1206 19:00:26.620976  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 19:00:26.667390  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 19:00:26.667473  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 19:00:26.824586  245404 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 19:00:26.824613  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1206 19:00:27.024990  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 19:00:27.025023  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 19:00:27.173242  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 19:00:27.307753  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 19:00:27.307779  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 19:00:27.329638  245404 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.695705773s)
	I1206 19:00:27.329670  245404 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
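The sed pipeline that just completed splices a `hosts` stanza (plus a `log` directive) into the CoreDNS Corefile before `kubectl replace` writes the ConfigMap back; reconstructed from the sed expressions, the injected stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

This is what lets pods resolve host.minikube.internal to the docker bridge gateway.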
	I1206 19:00:27.329694  245404 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.670791791s)
	I1206 19:00:27.330567  245404 node_ready.go:35] waiting up to 6m0s for node "addons-440984" to be "Ready" ...
	I1206 19:00:27.335819  245404 node_ready.go:49] node "addons-440984" has status "Ready":"True"
	I1206 19:00:27.335862  245404 node_ready.go:38] duration metric: took 5.264112ms waiting for node "addons-440984" to be "Ready" ...
	I1206 19:00:27.335874  245404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:00:27.360387  245404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q474z" in "kube-system" namespace to be "Ready" ...
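`pod_ready` here is minikube's own readiness poll; the `has status "Ready":"False"` lines that recur below are that same check repeating every couple of seconds until the pod's Ready condition flips. A standalone equivalent (a sketch; minikube polls through client-go rather than kubectl):

	# block until the CoreDNS pod reports Ready, mirroring pod_ready's wait
	kubectl --context addons-440984 -n kube-system wait \
	  --for=condition=Ready pod/coredns-5dd5756b68-q474z --timeout=6m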
	I1206 19:00:27.604523  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 19:00:27.604594  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 19:00:27.848504  245404 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 19:00:27.848580  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 19:00:28.149679  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 19:00:29.327930  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.769556363s)
	I1206 19:00:29.327988  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.747299207s)
	I1206 19:00:29.328214  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.66247712s)
	W1206 19:00:29.342953  245404 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
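The warning above is an ordinary optimistic-concurrency conflict: the default-storageclass enabler and the storage-provisioner-rancher enabler race to update the `is-default-class` annotation on the `local-path` StorageClass, and the later write carries a stale resourceVersion. Done by hand, the same annotation flip is a single patch (minikube simply retries it on conflict):

	# mark local-path as non-default; safe to re-run if the resourceVersion was stale
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'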
	I1206 19:00:29.390291  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:30.089716  245404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 19:00:30.089844  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:30.122091  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:31.337407  245404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 19:00:31.634550  245404 addons.go:231] Setting addon gcp-auth=true in "addons-440984"
	I1206 19:00:31.634653  245404 host.go:66] Checking if "addons-440984" exists ...
	I1206 19:00:31.635226  245404 cli_runner.go:164] Run: docker container inspect addons-440984 --format={{.State.Status}}
	I1206 19:00:31.690972  245404 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 19:00:31.691026  245404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440984
	I1206 19:00:31.738486  245404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/addons-440984/id_rsa Username:docker}
	I1206 19:00:31.902942  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:33.936262  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:33.953277  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.176356013s)
	I1206 19:00:33.953359  245404 addons.go:467] Verifying addon ingress=true in "addons-440984"
	I1206 19:00:33.956691  245404 out.go:177] * Verifying ingress addon...
	I1206 19:00:33.953519  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.832600775s)
	I1206 19:00:33.953547  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.798331449s)
	I1206 19:00:33.953572  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.206782937s)
	I1206 19:00:33.953792  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.595209871s)
	I1206 19:00:33.953382  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.143985011s)
	I1206 19:00:33.953907  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.332862659s)
	I1206 19:00:33.953967  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.780695122s)
	I1206 19:00:33.956881  245404 addons.go:467] Verifying addon registry=true in "addons-440984"
	I1206 19:00:33.957008  245404 addons.go:467] Verifying addon metrics-server=true in "addons-440984"
	W1206 19:00:33.957038  245404 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 19:00:33.960860  245404 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 19:00:33.962886  245404 out.go:177] * Verifying registry addon...
	I1206 19:00:33.966892  245404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 19:00:33.963012  245404 retry.go:31] will retry after 334.302992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
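Both failures above are the usual CRD ordering problem: `csi-hostpath-snapshotclass.yaml` defines a `VolumeSnapshotClass` in the same `kubectl apply` batch as the CRDs that introduce that kind, and the API server has not yet established the CRD when the custom resource is validated, so the REST mapping lookup fails. The retry at 19:00:34 re-applies with `--force` and succeeds once the CRDs are established. A manual fix splits the apply and waits for establishment (a sketch, reusing the file paths from the log):

	# apply the CRDs first, wait until they are established, then the custom resources
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml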
	I1206 19:00:33.970065  245404 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 19:00:33.970091  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:33.976079  245404 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 19:00:33.976148  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:33.979038  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:33.981901  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:34.302158  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 19:00:34.483857  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:34.486839  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:34.989591  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:34.990094  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:35.489057  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:35.489768  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:35.721419  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.571638786s)
	I1206 19:00:35.721449  245404 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-440984"
	I1206 19:00:35.724475  245404 out.go:177] * Verifying csi-hostpath-driver addon...
	I1206 19:00:35.721858  245404 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.030761793s)
	I1206 19:00:35.729745  245404 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1206 19:00:35.727671  245404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 19:00:35.734128  245404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 19:00:35.736482  245404 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 19:00:35.736508  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 19:00:35.775445  245404 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 19:00:35.775527  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:35.799230  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:35.877712  245404 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 19:00:35.877872  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 19:00:35.937136  245404 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 19:00:35.937210  245404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1206 19:00:35.986266  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:35.991993  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:36.087528  245404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 19:00:36.306152  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:36.383441  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:36.484538  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:36.488810  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:36.814001  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:36.984919  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:36.989213  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:37.001554  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.69928642s)
	I1206 19:00:37.306339  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:37.520020  245404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.432399907s)
	I1206 19:00:37.522810  245404 addons.go:467] Verifying addon gcp-auth=true in "addons-440984"
	I1206 19:00:37.534937  245404 out.go:177] * Verifying gcp-auth addon...
	I1206 19:00:37.527938  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:37.529621  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:37.544811  245404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 19:00:37.567957  245404 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 19:00:37.567986  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:37.579729  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:37.805275  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:37.983902  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:37.989308  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:38.084483  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:38.305476  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:38.484880  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:38.487818  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:38.584096  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:38.805269  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:38.883665  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:38.985795  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:38.987943  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:39.084017  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:39.307160  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:39.483956  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:39.487682  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:39.583786  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:39.807373  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:39.986873  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:39.988658  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:40.084108  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:40.306571  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:40.486057  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:40.489874  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:40.583652  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:40.806847  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:40.885523  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:40.988679  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:40.989471  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:41.086730  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:41.305722  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:41.485594  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:41.488264  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:41.583680  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:41.805313  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:41.983731  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:41.988582  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:42.085761  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:42.305924  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:42.489890  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:42.494396  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:42.584346  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:42.805389  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:42.984748  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:42.989152  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:43.084801  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:43.305756  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:43.383732  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:43.485799  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:43.489568  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:43.584445  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:43.806075  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:43.986270  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:43.990192  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:44.084535  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:44.305790  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:44.485122  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:44.487403  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:44.584256  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:44.805348  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:44.984685  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:44.989133  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:45.104796  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:45.309391  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:45.484329  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:45.487576  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:45.584493  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:45.805251  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:45.883025  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:45.984379  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:45.989576  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:46.084714  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:46.306497  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:46.486584  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:46.489387  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:46.583831  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:46.806360  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:46.984005  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:46.987009  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:47.083632  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:47.305722  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:47.484210  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:47.487807  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:47.583308  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:47.805690  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:47.983905  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:47.987654  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:48.084126  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:48.305115  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:48.382852  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:48.487024  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:48.487545  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:48.584361  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:48.805857  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:48.987960  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:48.995603  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:49.084818  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:49.305867  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:49.485128  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:49.486649  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:49.584420  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:49.807024  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:49.985912  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:49.989658  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:50.083957  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:50.306286  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:50.382914  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:50.490238  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:50.491988  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:50.583890  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:50.805967  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:50.984795  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:50.987446  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:51.084414  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:51.306419  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:51.484264  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:51.487278  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:51.583944  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:51.806672  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:51.984266  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:51.989260  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:52.084081  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:52.307966  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:52.389736  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:52.486192  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:52.488252  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:52.584649  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:52.806534  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:52.984775  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:52.992295  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:53.084029  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:53.312099  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:53.488432  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:53.488937  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:53.584887  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:53.805924  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:53.983871  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:53.987737  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:54.085339  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:54.309406  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:54.485190  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:54.490113  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:54.584476  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:54.806708  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:54.884252  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:54.989141  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:54.999307  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:55.086978  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:55.307659  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:55.487561  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:55.488596  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:55.584764  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:55.805320  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:55.986075  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:55.997048  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:56.084156  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:56.307103  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:56.489457  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:56.491201  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:56.587043  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:56.806091  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:56.987369  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:56.988055  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:57.084244  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:57.311577  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:57.382763  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:57.488806  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:57.489547  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:57.583863  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:57.805968  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:57.987116  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:57.989755  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:58.083984  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:58.306132  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:58.486200  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:58.489682  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:58.588148  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:58.805825  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:58.987225  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:58.988702  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:59.083757  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:59.305514  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:59.388423  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:00:59.486103  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:59.487572  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:00:59.584143  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:00:59.805720  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:00:59.988440  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:00:59.989890  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:00.112238  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:00.344843  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:00.489536  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:00.494648  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:00.595943  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:00.806267  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:00.989095  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:00.990396  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:01.085246  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:01.307658  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:01.487399  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:01.490458  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:01.584560  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:01.806773  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:01.885495  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:01:01.994447  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:01.996630  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:02.083900  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:02.307179  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:02.485190  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:02.488672  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:02.585793  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:02.807530  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:03.008013  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:03.018399  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:03.090126  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:03.308852  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:03.486205  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:03.495106  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:03.584792  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:03.807066  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:03.887259  245404 pod_ready.go:102] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"False"
	I1206 19:01:03.987889  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:03.992431  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:04.084847  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:04.305877  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:04.392110  245404 pod_ready.go:92] pod "coredns-5dd5756b68-q474z" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.392135  245404 pod_ready.go:81] duration metric: took 37.031713842s waiting for pod "coredns-5dd5756b68-q474z" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.392147  245404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.406184  245404 pod_ready.go:92] pod "etcd-addons-440984" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.406262  245404 pod_ready.go:81] duration metric: took 14.106031ms waiting for pod "etcd-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.406289  245404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.415495  245404 pod_ready.go:92] pod "kube-apiserver-addons-440984" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.415571  245404 pod_ready.go:81] duration metric: took 9.257517ms waiting for pod "kube-apiserver-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.415600  245404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.426785  245404 pod_ready.go:92] pod "kube-controller-manager-addons-440984" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.426859  245404 pod_ready.go:81] duration metric: took 11.237261ms waiting for pod "kube-controller-manager-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.426887  245404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnlb6" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.446107  245404 pod_ready.go:92] pod "kube-proxy-xnlb6" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.446183  245404 pod_ready.go:81] duration metric: took 19.274621ms waiting for pod "kube-proxy-xnlb6" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.446211  245404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.486990  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:04.491109  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:04.584528  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:04.780109  245404 pod_ready.go:92] pod "kube-scheduler-addons-440984" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:04.780180  245404 pod_ready.go:81] duration metric: took 333.937304ms waiting for pod "kube-scheduler-addons-440984" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.780206  245404 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zwpkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:04.816590  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:04.989543  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:04.989946  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:05.084063  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:05.180171  245404 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zwpkt" in "kube-system" namespace has status "Ready":"True"
	I1206 19:01:05.180199  245404 pod_ready.go:81] duration metric: took 399.971828ms waiting for pod "nvidia-device-plugin-daemonset-zwpkt" in "kube-system" namespace to be "Ready" ...
	I1206 19:01:05.180210  245404 pod_ready.go:38] duration metric: took 37.84432507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
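(Each pod_ready.go wait above resolves to reading the Ready condition off a single named pod; a compact sketch of that per-pod check, assuming client-go — the helper name is illustrative, not minikube's API.)

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod carries condition Ready=True,
// which is what the pod_ready.go:92 lines above assert pod by pod.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}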
	I1206 19:01:05.180229  245404 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:01:05.180332  245404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:01:05.205794  245404 api_server.go:72] duration metric: took 41.555881801s to wait for apiserver process to appear ...
	I1206 19:01:05.205821  245404 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:01:05.205839  245404 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 19:01:05.216052  245404 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
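(The healthz probe logged at api_server.go:253/279 is an HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch follows; the real check authenticates with the cluster's certificates, whereas this illustrative version simply skips TLS verification.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is an illustrative shortcut; the real client presents
	// the cluster's client certificates instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200 and "ok"
}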
	I1206 19:01:05.217521  245404 api_server.go:141] control plane version: v1.28.4
	I1206 19:01:05.217550  245404 api_server.go:131] duration metric: took 11.721683ms to wait for apiserver health ...
	I1206 19:01:05.217560  245404 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:01:05.305706  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:05.386722  245404 system_pods.go:59] 17 kube-system pods found
	I1206 19:01:05.386768  245404 system_pods.go:61] "coredns-5dd5756b68-q474z" [43813035-fa24-4884-9595-aeb4936f37e0] Running
	I1206 19:01:05.386781  245404 system_pods.go:61] "csi-hostpath-attacher-0" [be16615b-9235-4fc8-b021-5077a5f072ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 19:01:05.386791  245404 system_pods.go:61] "csi-hostpath-resizer-0" [f922f8cb-881e-4399-b284-68e4911159c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 19:01:05.386802  245404 system_pods.go:61] "csi-hostpathplugin-jb6kf" [b948b9df-7265-41ee-b0e3-875e0f6f8872] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 19:01:05.386808  245404 system_pods.go:61] "etcd-addons-440984" [2fa5fb2e-c75c-4029-aa7e-cf2088af3fe3] Running
	I1206 19:01:05.386814  245404 system_pods.go:61] "kube-apiserver-addons-440984" [84da69d1-4758-4b7b-b50a-cf97d38af4af] Running
	I1206 19:01:05.386822  245404 system_pods.go:61] "kube-controller-manager-addons-440984" [982b3920-bd52-40c1-bac3-db34c9f2acf6] Running
	I1206 19:01:05.386837  245404 system_pods.go:61] "kube-ingress-dns-minikube" [daaff04f-2761-4cb5-9a70-d6c8d773456c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 19:01:05.386848  245404 system_pods.go:61] "kube-proxy-xnlb6" [d008fee1-3d5e-4d5d-8233-0dc8768f79bc] Running
	I1206 19:01:05.386854  245404 system_pods.go:61] "kube-scheduler-addons-440984" [77bc0984-946f-41f6-9d22-4492ccccb13c] Running
	I1206 19:01:05.386868  245404 system_pods.go:61] "metrics-server-7c66d45ddc-gvs8h" [3f16f94b-9ea8-447f-bc5a-405dca598bb1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:01:05.386874  245404 system_pods.go:61] "nvidia-device-plugin-daemonset-zwpkt" [40ec6200-edb6-432d-8664-84c6a52db627] Running
	I1206 19:01:05.386884  245404 system_pods.go:61] "registry-ctltk" [29445e78-d0ab-477b-aa78-a2b25b760193] Running
	I1206 19:01:05.386890  245404 system_pods.go:61] "registry-proxy-d4bz6" [2271c7ec-a452-465a-a1f4-4f286f7b4c6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 19:01:05.386896  245404 system_pods.go:61] "snapshot-controller-58dbcc7b99-j2j45" [9f82e322-719c-45c6-9655-e96ecc12cfc1] Running
	I1206 19:01:05.386902  245404 system_pods.go:61] "snapshot-controller-58dbcc7b99-wzbkm" [d686e54a-f49e-4920-ae5a-efa708dc5fcb] Running
	I1206 19:01:05.386907  245404 system_pods.go:61] "storage-provisioner" [0f1a6ddf-c822-4e64-8b74-e312be4bd309] Running
	I1206 19:01:05.386913  245404 system_pods.go:74] duration metric: took 169.347154ms to wait for pod list to return data ...
	I1206 19:01:05.386932  245404 default_sa.go:34] waiting for default service account to be created ...
	I1206 19:01:05.487936  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:05.488709  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:05.579483  245404 default_sa.go:45] found service account: "default"
	I1206 19:01:05.579511  245404 default_sa.go:55] duration metric: took 192.571867ms for default service account to be created ...
	I1206 19:01:05.579522  245404 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 19:01:05.583992  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:05.788160  245404 system_pods.go:86] 17 kube-system pods found
	I1206 19:01:05.788193  245404 system_pods.go:89] "coredns-5dd5756b68-q474z" [43813035-fa24-4884-9595-aeb4936f37e0] Running
	I1206 19:01:05.788206  245404 system_pods.go:89] "csi-hostpath-attacher-0" [be16615b-9235-4fc8-b021-5077a5f072ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 19:01:05.788217  245404 system_pods.go:89] "csi-hostpath-resizer-0" [f922f8cb-881e-4399-b284-68e4911159c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 19:01:05.788235  245404 system_pods.go:89] "csi-hostpathplugin-jb6kf" [b948b9df-7265-41ee-b0e3-875e0f6f8872] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 19:01:05.788245  245404 system_pods.go:89] "etcd-addons-440984" [2fa5fb2e-c75c-4029-aa7e-cf2088af3fe3] Running
	I1206 19:01:05.788252  245404 system_pods.go:89] "kube-apiserver-addons-440984" [84da69d1-4758-4b7b-b50a-cf97d38af4af] Running
	I1206 19:01:05.788257  245404 system_pods.go:89] "kube-controller-manager-addons-440984" [982b3920-bd52-40c1-bac3-db34c9f2acf6] Running
	I1206 19:01:05.788266  245404 system_pods.go:89] "kube-ingress-dns-minikube" [daaff04f-2761-4cb5-9a70-d6c8d773456c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 19:01:05.788309  245404 system_pods.go:89] "kube-proxy-xnlb6" [d008fee1-3d5e-4d5d-8233-0dc8768f79bc] Running
	I1206 19:01:05.788315  245404 system_pods.go:89] "kube-scheduler-addons-440984" [77bc0984-946f-41f6-9d22-4492ccccb13c] Running
	I1206 19:01:05.788325  245404 system_pods.go:89] "metrics-server-7c66d45ddc-gvs8h" [3f16f94b-9ea8-447f-bc5a-405dca598bb1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 19:01:05.788334  245404 system_pods.go:89] "nvidia-device-plugin-daemonset-zwpkt" [40ec6200-edb6-432d-8664-84c6a52db627] Running
	I1206 19:01:05.788340  245404 system_pods.go:89] "registry-ctltk" [29445e78-d0ab-477b-aa78-a2b25b760193] Running
	I1206 19:01:05.788346  245404 system_pods.go:89] "registry-proxy-d4bz6" [2271c7ec-a452-465a-a1f4-4f286f7b4c6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 19:01:05.788357  245404 system_pods.go:89] "snapshot-controller-58dbcc7b99-j2j45" [9f82e322-719c-45c6-9655-e96ecc12cfc1] Running
	I1206 19:01:05.788362  245404 system_pods.go:89] "snapshot-controller-58dbcc7b99-wzbkm" [d686e54a-f49e-4920-ae5a-efa708dc5fcb] Running
	I1206 19:01:05.788367  245404 system_pods.go:89] "storage-provisioner" [0f1a6ddf-c822-4e64-8b74-e312be4bd309] Running
	I1206 19:01:05.788376  245404 system_pods.go:126] duration metric: took 208.849014ms to wait for k8s-apps to be running ...
	I1206 19:01:05.788386  245404 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:01:05.788449  245404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:01:05.808365  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:05.809159  245404 system_svc.go:56] duration metric: took 20.767504ms WaitForService to wait for kubelet.
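(The kubelet service check above runs "sudo systemctl is-active --quiet service kubelet" over the node's SSH session via ssh_runner. A local stand-in using os/exec — running the command locally rather than over SSH is an assumption for illustration, not minikube's runner.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Arguments mirror the logged command verbatim.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}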
	I1206 19:01:05.809223  245404 kubeadm.go:581] duration metric: took 42.159316155s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:01:05.809259  245404 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:01:05.982099  245404 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 19:01:05.982193  245404 node_conditions.go:123] node cpu capacity is 2
	I1206 19:01:05.982221  245404 node_conditions.go:105] duration metric: took 172.936299ms to run NodePressure ...
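(The NodePressure verification above reads each node's reported capacity and conditions; a sketch of an equivalent check with client-go — the function name is illustrative.)

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure prints each node's capacity (as the node_conditions.go
// lines above do) and fails if any node reports memory or disk pressure.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}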
	I1206 19:01:05.982268  245404 start.go:228] waiting for startup goroutines ...
	I1206 19:01:05.988764  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:05.992038  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:06.084583  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:06.307875  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:06.484223  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:06.488109  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:06.584647  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:06.806347  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:06.984173  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:06.989027  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:07.086014  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:07.305238  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:07.489248  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:07.490725  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 19:01:07.583315  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:07.805175  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:07.986727  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:07.989112  245404 kapi.go:107] duration metric: took 34.022219186s to wait for kubernetes.io/minikube-addons=registry ...
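(With the registry pods Ready, the 34s label-selector wait above completes. A roughly equivalent one-off check from the command line — not what the harness runs, which polls in-process as sketched earlier — would be:

kubectl wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry -n kube-system --timeout=120s
)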
	I1206 19:01:08.084570  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:08.305884  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:08.483842  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:08.583595  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:08.810336  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:08.984313  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:09.084556  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:09.305972  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:09.484320  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:09.584585  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:09.806926  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:09.987839  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:10.088023  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:10.305640  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:10.483604  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:10.590835  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:10.806641  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:10.985172  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:11.085096  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:11.306378  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:11.484156  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:11.584768  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:11.808070  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:11.983715  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:12.084433  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:12.308496  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:12.485668  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:12.583696  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:12.806078  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:12.984721  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:13.083482  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:13.314112  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:13.485330  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:13.583708  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:13.808885  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:13.983706  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:14.085046  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:14.305887  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:14.484873  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:14.584050  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:14.806425  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:14.984245  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:15.086065  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:15.305067  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:15.484639  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:15.584885  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:15.814028  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:15.984393  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:16.086893  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:16.306267  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:16.499976  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:16.584078  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:16.806237  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:16.985222  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:17.084515  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:17.307017  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:17.484615  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:17.586380  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:17.805639  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:17.984183  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:18.085839  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:18.307014  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:18.486920  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:18.585367  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:18.805348  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:18.983842  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:19.083821  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:19.305659  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:19.489030  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:19.595076  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:19.806299  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:19.984620  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:20.085238  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:20.305138  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:20.485069  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:20.584395  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:20.807180  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:20.984444  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:21.084795  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:21.306038  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:21.485798  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:21.583559  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:21.807176  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:21.984231  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:22.084147  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:22.305505  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:22.484016  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:22.584449  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:22.806823  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:22.984976  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:23.084421  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:23.305554  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 19:01:23.483931  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:23.584511  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:23.805662  245404 kapi.go:107] duration metric: took 48.077993745s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 19:01:23.984263  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:24.084211  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:24.483548  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:24.584950  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 84 near-identical poll lines elided: the same two "waiting for pod" messages repeat about twice per second from 19:01:24 through 19:01:45 ...]
	I1206 19:01:45.983682  245404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 19:01:46.083801  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:46.491709  245404 kapi.go:107] duration metric: took 1m12.530847252s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 19:01:46.583600  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:47.085240  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:47.584480  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:48.092026  245404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 19:01:48.587420  245404 kapi.go:107] duration metric: took 1m11.042607123s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 19:01:48.589566  245404 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-440984 cluster.
	I1206 19:01:48.591740  245404 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 19:01:48.593958  245404 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 19:01:48.596404  245404 out.go:177] * Enabled addons: storage-provisioner, storage-provisioner-rancher, ingress-dns, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1206 19:01:48.598833  245404 addons.go:502] enable addons completed in 1m25.400134616s: enabled=[storage-provisioner storage-provisioner-rancher ingress-dns cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1206 19:01:48.598886  245404 start.go:233] waiting for cluster config update ...
	I1206 19:01:48.598921  245404 start.go:242] writing updated cluster config ...
	I1206 19:01:48.599243  245404 ssh_runner.go:195] Run: rm -f paused
	I1206 19:01:48.949039  245404 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 19:01:48.951083  245404 out.go:177] * Done! kubectl is now configured to use "addons-440984" cluster and "default" namespace by default
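The alternating "waiting for pod" lines above are a label-selector poll: the addon enabler lists pods matching each selector on a short interval, logs the pod phase while any match is still Pending, and records the total wait as a duration metric once all are Running. A minimal client-go sketch of that pattern (this is not minikube's actual kapi.go; the 500ms interval, function names, and kubeconfig loading are assumptions):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until at least one pod matches the label selector and
    // every match reports phase Running, echoing the log lines above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) (time.Duration, error) {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			running := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					running = false
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				}
    			}
    			if running {
    				return time.Since(start), nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return time.Since(start), ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	d, err := waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
    	fmt.Println("duration metric:", d, err)
    }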
	
	* 
	* ==> Docker <==
	* Dec 06 19:02:39 addons-440984 dockerd[1098]: time="2023-12-06T19:02:39.924207817Z" level=info msg="ignoring event" container=a6dccaac0daa368a24872c314f68e79fead0571751cd99a1d4cea942d82a6d6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:39 addons-440984 dockerd[1098]: time="2023-12-06T19:02:39.950460419Z" level=info msg="ignoring event" container=06c37a5dc7fa3a930385a0be3f5257f5783f9b4f89bed0d661e9f9c0340e1e96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:39 addons-440984 dockerd[1098]: time="2023-12-06T19:02:39.965137076Z" level=info msg="ignoring event" container=c124f441648d5be4d489a8ebe7b8dbe106b3ae87e5c055d75c920383e9697855 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:40 addons-440984 dockerd[1098]: time="2023-12-06T19:02:40.113304318Z" level=info msg="ignoring event" container=9196fd537a9a646300b4b05798885dcc68c8a66419af78bc58753f25bb0ce4d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:40 addons-440984 dockerd[1098]: time="2023-12-06T19:02:40.211169293Z" level=info msg="ignoring event" container=e0cb810b83561518f0e00c7199433c7ccfa7bc485ebc6f07e35159e7c557bd34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:40 addons-440984 dockerd[1098]: time="2023-12-06T19:02:40.239385828Z" level=info msg="ignoring event" container=e49942b85a220adf420c950c17fd0ebb98895760c36577b006e480cf9a01cf43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:46 addons-440984 dockerd[1098]: time="2023-12-06T19:02:46.617041619Z" level=info msg="ignoring event" container=1679407f4497bee9abf85fefe9dffd1692f4825558b2878616722557eea39da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:46 addons-440984 dockerd[1098]: time="2023-12-06T19:02:46.646133062Z" level=info msg="ignoring event" container=92f6190c44574f59289b60e16b703de243a5dacf04054408fbab850a80cf21ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:46 addons-440984 dockerd[1098]: time="2023-12-06T19:02:46.790651422Z" level=info msg="ignoring event" container=4efb559ab3cf4f6f2cbd1a0fa5cf1462aa21b99618612320493498633de9fae5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:46 addons-440984 dockerd[1098]: time="2023-12-06T19:02:46.833016886Z" level=info msg="ignoring event" container=e09cdbbde84b30283da22aafe56e8020770943b1dff0f3d75ed91bafc04b9906 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:48 addons-440984 dockerd[1098]: time="2023-12-06T19:02:48.280258424Z" level=info msg="ignoring event" container=6ebf9b276a0fb68dbb402108bfb5c93e6af57f74967a7aa1c38193f815f1fa78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:48 addons-440984 cri-dockerd[1308]: time="2023-12-06T19:02:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77b4f82c938333288feaaf88f32337bfedc52dda39da1c078fc95c86b2b2baa5/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 06 19:02:48 addons-440984 dockerd[1098]: time="2023-12-06T19:02:48.599690120Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 19:02:49 addons-440984 cri-dockerd[1308]: time="2023-12-06T19:02:49Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 19:02:49 addons-440984 dockerd[1098]: time="2023-12-06T19:02:49.515739589Z" level=info msg="ignoring event" container=877ad8797c15d7ae3ae900a445baeb800a0e83d1012b044720d0d4280c78b245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:50 addons-440984 dockerd[1098]: time="2023-12-06T19:02:50.899495679Z" level=info msg="ignoring event" container=77b4f82c938333288feaaf88f32337bfedc52dda39da1c078fc95c86b2b2baa5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:51 addons-440984 dockerd[1098]: time="2023-12-06T19:02:51.229630355Z" level=info msg="ignoring event" container=f99be92dc0f82a8d01b2c0c9d3effacb3c7825b38bd62ad1d6a72d2f941328ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:52 addons-440984 dockerd[1098]: time="2023-12-06T19:02:52.673684901Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=4fdd8888c4108341c479fc1f34cb86ba12492edb0e25c91ca01971535039bdf2
	Dec 06 19:02:52 addons-440984 dockerd[1098]: time="2023-12-06T19:02:52.761773696Z" level=info msg="ignoring event" container=4fdd8888c4108341c479fc1f34cb86ba12492edb0e25c91ca01971535039bdf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:52 addons-440984 cri-dockerd[1308]: time="2023-12-06T19:02:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-7c6974c4d8-nxj8g_ingress-nginx\": unexpected command output nsenter: cannot open /proc/8514/ns/net: No such file or directory\n with error: exit status 1"
	Dec 06 19:02:52 addons-440984 dockerd[1098]: time="2023-12-06T19:02:52.931093528Z" level=info msg="ignoring event" container=74e811b2a02cf6a8f2dfaefe44a552e05d87c317adeb9cbb7518e2a3a23b95af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:53 addons-440984 cri-dockerd[1308]: time="2023-12-06T19:02:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61d61b00152b127c1b8ff3b58b2b4a56ad393681299becf71989f0ca1d020c33/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 06 19:02:54 addons-440984 cri-dockerd[1308]: time="2023-12-06T19:02:54Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Dec 06 19:02:54 addons-440984 dockerd[1098]: time="2023-12-06T19:02:54.398916399Z" level=info msg="ignoring event" container=c616b762ce51cac5cb29f5d0c47098379f8d5649a1bc7a20314ec7e699532c8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:02:56 addons-440984 dockerd[1098]: time="2023-12-06T19:02:56.131461820Z" level=info msg="ignoring event" container=61d61b00152b127c1b8ff3b58b2b4a56ad393681299becf71989f0ca1d020c33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
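The "Container failed to exit within 2s of signal 15 - using the force" line above is dockerd's standard stop sequence: SIGTERM, a grace period, then SIGKILL. Here the grace value is passed down from Kubernetes pod termination via cri-dockerd; requesting the same sequence directly through the Docker Engine Go SDK looks roughly like this (a sketch; the 2-second value just mirrors the log line):

    import (
    	"context"

    	"github.com/docker/docker/api/types/container"
    	"github.com/docker/docker/client"
    )

    // stopWithGrace asks dockerd to SIGTERM the container, wait `grace`
    // seconds, and SIGKILL it if still running -- the sequence logged above.
    func stopWithGrace(ctx context.Context, id string) error {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		return err
    	}
    	grace := 2 // seconds, mirroring "within 2s of signal 15"
    	return cli.ContainerStop(ctx, id, container.StopOptions{Timeout: &grace})
    }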
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	c616b762ce51c       busybox@sha256:1ceb872bcc68a8fcd34c97952658b58086affdcb604c90c1dee2735bde5edc2f                                              4 seconds ago        Exited              busybox                    0                   61d61b00152b1       test-local-path
	f99be92dc0f82       dd1b12fcb6097                                                                                                                7 seconds ago        Exited              hello-world-app            2                   323f8183d656c       hello-world-app-5d77478584-gmwnw
	877ad8797c15d       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              9 seconds ago        Exited              helper-pod                 0                   77b4f82c93833       helper-pod-create-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5
	c34577fd21600       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                                34 seconds ago       Running             nginx                      0                   b09dfb23c6608       nginx
	aa6b80f2bf0eb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                   0                   14b71b00b74d3       gcp-auth-d4c87556c-w2ztw
	239f7ae39f85d       af594c6a879f2                                                                                                                About a minute ago   Exited              patch                      1                   b86cd5faa4559       ingress-nginx-admission-patch-zw7fk
	2f33ab894c776       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                     0                   bb13ea86c2689       ingress-nginx-admission-create-85hw8
	c493565ecbf74       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner     0                   21bd2b3bba79e       local-path-provisioner-78b46b4d5c-vhh2j
	912069789fb85       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator     0                   22c141642027a       cloud-spanner-emulator-5649c69bf6-2lhzh
	32143d71eb57a       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                     2 minutes ago        Running             nvidia-device-plugin-ctr   0                   7c82667bf5f50       nvidia-device-plugin-daemonset-zwpkt
	8aa5806818e04       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner        0                   7c60e0a0b98b7       storage-provisioner
	b91e73e4b4dc2       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                    0                   8a4010fd19a4c       coredns-5dd5756b68-q474z
	70041e2c50bf2       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                 0                   1b7fbdf05ce8c       kube-proxy-xnlb6
	4f8d0f98d8089       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler             0                   25403d0d88e4f       kube-scheduler-addons-440984
	cbd8341ea00e6       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                       0                   e2f6de976495f       etcd-addons-440984
	a63bf3cc679e7       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver             0                   305e8c7f48633       kube-apiserver-addons-440984
	78ce7674ab9ea       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager    0                   8f8dccec50b24       kube-controller-manager-addons-440984
	
	* 
	* ==> coredns [b91e73e4b4dc] <==
	* [INFO] 10.244.0.18:40018 - 48305 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089828s
	[INFO] 10.244.0.18:40018 - 47304 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051167s
	[INFO] 10.244.0.18:40018 - 13312 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008772s
	[INFO] 10.244.0.18:40018 - 42114 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091995s
	[INFO] 10.244.0.18:40018 - 60626 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001151801s
	[INFO] 10.244.0.18:40018 - 21560 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000939374s
	[INFO] 10.244.0.18:40018 - 9971 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000092413s
	[INFO] 10.244.0.18:48347 - 630 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102711s
	[INFO] 10.244.0.18:60026 - 58725 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054325s
	[INFO] 10.244.0.18:60026 - 31340 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112367s
	[INFO] 10.244.0.18:48347 - 43221 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074871s
	[INFO] 10.244.0.18:48347 - 50095 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068602s
	[INFO] 10.244.0.18:60026 - 5183 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000106542s
	[INFO] 10.244.0.18:60026 - 55974 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056508s
	[INFO] 10.244.0.18:48347 - 29116 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087958s
	[INFO] 10.244.0.18:60026 - 42665 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050329s
	[INFO] 10.244.0.18:48347 - 25423 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045669s
	[INFO] 10.244.0.18:60026 - 51728 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047942s
	[INFO] 10.244.0.18:48347 - 17156 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042485s
	[INFO] 10.244.0.18:48347 - 8426 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001250172s
	[INFO] 10.244.0.18:60026 - 53215 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001634797s
	[INFO] 10.244.0.18:48347 - 19518 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001135293s
	[INFO] 10.244.0.18:60026 - 55140 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001193721s
	[INFO] 10.244.0.18:60026 - 8225 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067757s
	[INFO] 10.244.0.18:48347 - 10272 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000354341s
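The run of NXDOMAIN answers above is ordinary resolv.conf search-list expansion, not a fault: the querying pod's resolv.conf (see the cri-dockerd re-write lines in the Docker section) uses ndots:5, and "hello-world-app.default.svc.cluster.local" contains only four dots, so the resolver appends each search suffix first and only then tries the absolute name, which answers NOERROR. A small self-contained sketch of that candidate ordering (illustration only; the libc/musl resolver does this, not application code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // candidates reproduces glibc-style search-list ordering: names with fewer
    // than ndots dots try each search suffix first, then the absolute name.
    func candidates(name string, search []string, ndots int) []string {
    	suffixed := make([]string, 0, len(search))
    	for _, s := range search {
    		suffixed = append(suffixed, name+"."+s)
    	}
    	if strings.Count(name, ".") < ndots {
    		return append(suffixed, name) // search list first, absolute last
    	}
    	return append([]string{name}, suffixed...) // absolute first
    }

    func main() {
    	search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}
    	for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
    		fmt.Println(q) // the first entries match the NXDOMAIN queries in the log
    	}
    }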
	
	* 
	* ==> describe nodes <==
	* Name:               addons-440984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-440984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=addons-440984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_00_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-440984
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:00:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-440984
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:02:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:02:43 +0000   Wed, 06 Dec 2023 19:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:02:43 +0000   Wed, 06 Dec 2023 19:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:02:43 +0000   Wed, 06 Dec 2023 19:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:02:43 +0000   Wed, 06 Dec 2023 19:00:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-440984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 d24f2d94999249219d5d7fbf3b214ea7
	  System UUID:                84bae995-e648-4a8e-9bc8-4feb6cbe4f03
	  Boot ID:                    4d819a28-0d74-43d3-adc9-9cf72064e49e
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-2lhzh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  default                     hello-world-app-5d77478584-gmwnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-w2ztw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 coredns-5dd5756b68-q474z                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 etcd-addons-440984                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m50s
	  kube-system                 kube-apiserver-addons-440984                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-controller-manager-addons-440984                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-proxy-xnlb6                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-addons-440984                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 nvidia-device-plugin-daemonset-zwpkt                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  local-path-storage          local-path-provisioner-78b46b4d5c-vhh2j                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node addons-440984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node addons-440984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node addons-440984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m48s                  kubelet          Node addons-440984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s                  kubelet          Node addons-440984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s                  kubelet          Node addons-440984 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m48s                  kubelet          Node addons-440984 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m38s                  kubelet          Node addons-440984 status is now: NodeReady
	  Normal  RegisteredNode           2m36s                  node-controller  Node addons-440984 event: Registered Node addons-440984 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001015] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=00000000b75f3d40
	[  +0.001123] FS-Cache: N-key=[8] '845b3b0000000000'
	[  +0.003312] FS-Cache: Duplicate cookie detected
	[  +0.000774] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001050] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=0000000048d7fd01
	[  +0.001141] FS-Cache: O-key=[8] '845b3b0000000000'
	[  +0.000843] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.001029] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=00000000a73a61e5
	[  +0.001131] FS-Cache: N-key=[8] '845b3b0000000000'
	[  +3.207976] FS-Cache: Duplicate cookie detected
	[  +0.001002] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001165] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=00000000e0c1de9e
	[  +0.001393] FS-Cache: O-key=[8] '835b3b0000000000'
	[  +0.001085] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001089] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=000000007890ace4
	[  +0.001273] FS-Cache: N-key=[8] '835b3b0000000000'
	[  +0.432216] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=0000000058e4b69d
	[  +0.001148] FS-Cache: O-key=[8] '8c5b3b0000000000'
	[  +0.000741] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=0000000047847aa7
	[  +0.001084] FS-Cache: N-key=[8] '8c5b3b0000000000'
	[Dec 6 17:51] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 19:00] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [cbd8341ea00e] <==
	* {"level":"info","ts":"2023-12-06T19:00:02.793695Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-06T19:00:02.793703Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-06T19:00:02.794698Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T19:00:02.794871Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T19:00:02.794895Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T19:00:02.79506Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-06T19:00:02.795076Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-06T19:00:03.560318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T19:00:03.560372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T19:00:03.5604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-06T19:00:03.560418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T19:00:03.560428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-06T19:00:03.560477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-06T19:00:03.560506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-06T19:00:03.568486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-440984 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T19:00:03.568698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:00:03.568758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T19:00:03.569931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T19:00:03.569931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-06T19:00:03.570038Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:00:03.580323Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T19:00:03.58449Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T19:00:03.584671Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:00:03.58488Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T19:00:03.584986Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [aa6b80f2bf0e] <==
	* 2023/12/06 19:01:47 GCP Auth Webhook started!
	2023/12/06 19:01:59 Ready to marshal response ...
	2023/12/06 19:01:59 Ready to write response ...
	2023/12/06 19:02:02 Ready to marshal response ...
	2023/12/06 19:02:02 Ready to write response ...
	2023/12/06 19:02:22 Ready to marshal response ...
	2023/12/06 19:02:22 Ready to write response ...
	2023/12/06 19:02:29 Ready to marshal response ...
	2023/12/06 19:02:29 Ready to write response ...
	2023/12/06 19:02:31 Ready to marshal response ...
	2023/12/06 19:02:31 Ready to write response ...
	2023/12/06 19:02:47 Ready to marshal response ...
	2023/12/06 19:02:47 Ready to write response ...
	2023/12/06 19:02:47 Ready to marshal response ...
	2023/12/06 19:02:47 Ready to write response ...
	2023/12/06 19:02:58 Ready to marshal response ...
	2023/12/06 19:02:58 Ready to write response ...
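This webhook backs the earlier minikube message that GCP credentials are mounted into every new pod, with the `gcp-auth-skip-secret` label as the opt-out. A sketch of creating an opted-out pod, reusing the clientset `cs` and imports from the polling sketch earlier (only the label key comes from this log; the "true" value and the pod details are assumptions):

    // createSkippedPod creates a pod carrying the opt-out label so the
    // gcp-auth webhook leaves it without the mounted credentials.
    func createSkippedPod(ctx context.Context, cs kubernetes.Interface) error {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "no-gcp-creds",
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed; key named in the log
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "app",
    				Image:   "busybox:stable",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}
    	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
    	return err
    }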
	
	* 
	* ==> kernel <==
	*  19:02:58 up  1:45,  0 users,  load average: 2.03, 2.41, 2.24
	Linux addons-440984 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [a63bf3cc679e] <==
	* W1206 19:02:17.132587       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1206 19:02:21.796469       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 19:02:22.240482       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.16.233"}
	I1206 19:02:32.172713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.8.142"}
	E1206 19:02:39.694086       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I1206 19:02:46.239505       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.239549       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.249336       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.249390       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.288795       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.288835       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.293275       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.293329       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.308580       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.308634       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.381652       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.381711       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.426177       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.426250       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 19:02:46.439361       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 19:02:46.439434       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1206 19:02:46.562279       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1206 19:02:47.289958       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 19:02:47.426282       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 19:02:47.490674       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [78ce7674ab9e] <==
	* W1206 19:02:48.692500       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:48.692531       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 19:02:49.638662       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1206 19:02:49.642101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="6.793µs"
	I1206 19:02:49.650147       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1206 19:02:50.353934       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:50.354164       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 19:02:51.338212       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:51.338250       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 19:02:51.483290       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:51.483337       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 19:02:51.881055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.369µs"
	I1206 19:02:52.970052       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1206 19:02:52.970095       1 shared_informer.go:318] Caches are synced for resource quota
	I1206 19:02:53.465059       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1206 19:02:53.465105       1 shared_informer.go:318] Caches are synced for garbage collector
	W1206 19:02:55.506853       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:55.506890       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 19:02:56.930700       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:56.930734       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 19:02:57.179942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:57.179977       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 19:02:58.178032       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 19:02:58.178076       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 19:02:58.967961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="7.827µs"
	
	* 
	* ==> kube-proxy [70041e2c50bf] <==
	* I1206 19:00:24.746624       1 server_others.go:69] "Using iptables proxy"
	I1206 19:00:24.809641       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1206 19:00:24.866690       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 19:00:24.869373       1 server_others.go:152] "Using iptables Proxier"
	I1206 19:00:24.869412       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 19:00:24.869421       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 19:00:24.869496       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 19:00:24.869758       1 server.go:846] "Version info" version="v1.28.4"
	I1206 19:00:24.869771       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 19:00:24.870799       1 config.go:188] "Starting service config controller"
	I1206 19:00:24.870843       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 19:00:24.870862       1 config.go:97] "Starting endpoint slice config controller"
	I1206 19:00:24.870867       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 19:00:24.871492       1 config.go:315] "Starting node config controller"
	I1206 19:00:24.871501       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 19:00:24.971130       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 19:00:24.971176       1 shared_informer.go:318] Caches are synced for service config
	I1206 19:00:24.971821       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4f8d0f98d808] <==
	* W1206 19:00:06.803497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 19:00:06.803605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 19:00:06.803785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 19:00:06.803886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1206 19:00:06.804090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 19:00:06.804193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 19:00:06.804392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 19:00:06.804498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 19:00:06.804678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 19:00:06.804782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 19:00:06.805041       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 19:00:06.805156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 19:00:06.805357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 19:00:06.805474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 19:00:06.805601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 19:00:06.805770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 19:00:06.806672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 19:00:06.806852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 19:00:07.637213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1206 19:00:07.637426       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1206 19:00:07.834557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 19:00:07.834805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 19:00:07.930607       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 19:00:07.930874       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1206 19:00:10.180948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 06 19:02:53 addons-440984 kubelet[2323]: I1206 19:02:53.098720    2323 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a4fdd9-4dab-4768-a58e-20b354ca2aa8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f1a4fdd9-4dab-4768-a58e-20b354ca2aa8" (UID: "f1a4fdd9-4dab-4768-a58e-20b354ca2aa8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 19:02:53 addons-440984 kubelet[2323]: I1206 19:02:53.194627    2323 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1a4fdd9-4dab-4768-a58e-20b354ca2aa8-webhook-cert\") on node \"addons-440984\" DevicePath \"\""
	Dec 06 19:02:53 addons-440984 kubelet[2323]: I1206 19:02:53.194694    2323 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nqxg\" (UniqueName: \"kubernetes.io/projected/f1a4fdd9-4dab-4768-a58e-20b354ca2aa8-kube-api-access-5nqxg\") on node \"addons-440984\" DevicePath \"\""
	Dec 06 19:02:53 addons-440984 kubelet[2323]: I1206 19:02:53.965293    2323 scope.go:117] "RemoveContainer" containerID="4fdd8888c4108341c479fc1f34cb86ba12492edb0e25c91ca01971535039bdf2"
	Dec 06 19:02:54 addons-440984 kubelet[2323]: I1206 19:02:54.049734    2323 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f1a4fdd9-4dab-4768-a58e-20b354ca2aa8" path="/var/lib/kubelet/pods/f1a4fdd9-4dab-4768-a58e-20b354ca2aa8/volumes"
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.313950    2323 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\") pod \"a4296a4b-627c-4486-9512-8a494c982e5b\" (UID: \"a4296a4b-627c-4486-9512-8a494c982e5b\") "
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314017    2323 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td5t7\" (UniqueName: \"kubernetes.io/projected/a4296a4b-627c-4486-9512-8a494c982e5b-kube-api-access-td5t7\") pod \"a4296a4b-627c-4486-9512-8a494c982e5b\" (UID: \"a4296a4b-627c-4486-9512-8a494c982e5b\") "
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314037    2323 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5" (OuterVolumeSpecName: "data") pod "a4296a4b-627c-4486-9512-8a494c982e5b" (UID: "a4296a4b-627c-4486-9512-8a494c982e5b"). InnerVolumeSpecName "pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314053    2323 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-gcp-creds\") pod \"a4296a4b-627c-4486-9512-8a494c982e5b\" (UID: \"a4296a4b-627c-4486-9512-8a494c982e5b\") "
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314078    2323 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a4296a4b-627c-4486-9512-8a494c982e5b" (UID: "a4296a4b-627c-4486-9512-8a494c982e5b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314150    2323 reconciler_common.go:300] "Volume detached for volume \"pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\" (UniqueName: \"kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\") on node \"addons-440984\" DevicePath \"\""
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.314167    2323 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a4296a4b-627c-4486-9512-8a494c982e5b-gcp-creds\") on node \"addons-440984\" DevicePath \"\""
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.316474    2323 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4296a4b-627c-4486-9512-8a494c982e5b-kube-api-access-td5t7" (OuterVolumeSpecName: "kube-api-access-td5t7") pod "a4296a4b-627c-4486-9512-8a494c982e5b" (UID: "a4296a4b-627c-4486-9512-8a494c982e5b"). InnerVolumeSpecName "kube-api-access-td5t7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 06 19:02:56 addons-440984 kubelet[2323]: I1206 19:02:56.414868    2323 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-td5t7\" (UniqueName: \"kubernetes.io/projected/a4296a4b-627c-4486-9512-8a494c982e5b-kube-api-access-td5t7\") on node \"addons-440984\" DevicePath \"\""
	Dec 06 19:02:57 addons-440984 kubelet[2323]: I1206 19:02:57.076264    2323 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d61b00152b127c1b8ff3b58b2b4a56ad393681299becf71989f0ca1d020c33"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.065664    2323 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a4296a4b-627c-4486-9512-8a494c982e5b" path="/var/lib/kubelet/pods/a4296a4b-627c-4486-9512-8a494c982e5b/volumes"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.066017    2323 topology_manager.go:215] "Topology Admit Handler" podUID="f7753e42-cef7-40d9-9219-e0333aab0c75" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: E1206 19:02:58.066646    2323 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4296a4b-627c-4486-9512-8a494c982e5b" containerName="busybox"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: E1206 19:02:58.066684    2323 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="daaff04f-2761-4cb5-9a70-d6c8d773456c" containerName="minikube-ingress-dns"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: E1206 19:02:58.066696    2323 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="daaff04f-2761-4cb5-9a70-d6c8d773456c" containerName="minikube-ingress-dns"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.066732    2323 memory_manager.go:346] "RemoveStaleState removing state" podUID="a4296a4b-627c-4486-9512-8a494c982e5b" containerName="busybox"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.231490    2323 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7753e42-cef7-40d9-9219-e0333aab0c75-gcp-creds\") pod \"helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\" (UID: \"f7753e42-cef7-40d9-9219-e0333aab0c75\") " pod="local-path-storage/helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.231552    2323 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7753e42-cef7-40d9-9219-e0333aab0c75-script\") pod \"helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\" (UID: \"f7753e42-cef7-40d9-9219-e0333aab0c75\") " pod="local-path-storage/helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.231588    2323 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7753e42-cef7-40d9-9219-e0333aab0c75-data\") pod \"helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\" (UID: \"f7753e42-cef7-40d9-9219-e0333aab0c75\") " pod="local-path-storage/helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5"
	Dec 06 19:02:58 addons-440984 kubelet[2323]: I1206 19:02:58.231620    2323 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7d9w\" (UniqueName: \"kubernetes.io/projected/f7753e42-cef7-40d9-9219-e0333aab0c75-kube-api-access-g7d9w\") pod \"helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5\" (UID: \"f7753e42-cef7-40d9-9219-e0333aab0c75\") " pod="local-path-storage/helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5"
	
	* 
	* ==> storage-provisioner [8aa5806818e0] <==
	* I1206 19:00:31.466910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 19:00:31.482875       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 19:00:31.483835       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 19:00:31.505749       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 19:00:31.507628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-440984_e92f4272-2eeb-4622-930f-599f590e6d60!
	I1206 19:00:31.518067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51733654-b6ea-4049-827d-1f7c824b1196", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-440984_e92f4272-2eeb-4622-930f-599f590e6d60 became leader
	I1206 19:00:31.610898       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-440984_e92f4272-2eeb-4622-930f-599f590e6d60!
	

                                                
                                                
-- /stdout --
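The storage-provisioner block near the end of the dump above is client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock before starting its controller. Below is a minimal sketch of that pattern, assuming a reachable kubeconfig in $KUBECONFIG; it uses the current Lease-based lock, whereas the provisioner in the log uses an older Endpoints-based lock (note the Endpoints event at ResourceVersion 603), and the identity string here is made up.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease name/namespace mirror the provisioner log; the identity is illustrative.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to "Starting provisioner controller ..." in the log.
				log.Println("acquired lock; starting controller")
			},
			OnStoppedLeading: func() { log.Println("lost lock; shutting down") },
		},
	})
}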
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-440984 -n addons-440984
helpers_test.go:261: (dbg) Run:  kubectl --context addons-440984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-440984 describe pod helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-440984 describe pod helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5: exit status 1 (271.369066ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-440984 describe pod helper-pod-delete-pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5: exit status 1
--- FAIL: TestAddons/parallel/Ingress (38.92s)
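The check that fails in both ingress tests in this report is nothing more than a DNS query sent directly to the cluster node IP, where the ingress-dns addon is expected to answer on port 53. A minimal Go sketch of that probe follows; the hostname and IP are the ones from the log, but the resolver wiring is illustrative only (the real test shells out to nslookup):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	nodeIP := "192.168.49.2" // reported by `minikube ip` in the log

	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Send every query to the ingress-dns server on the node,
			// ignoring the system resolver configuration.
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, net.JoinHostPort(nodeIP, "53"))
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// The failing runs above take this path: the query times out and
		// nslookup prints "no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}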

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (56.83s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-998555 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-998555 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.118396061s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-998555 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-998555 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9fac0a41-cbb0-4344-a091-e72a67337199] Pending
helpers_test.go:344: "nginx" [9fac0a41-cbb0-4344-a091-e72a67337199] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9fac0a41-cbb0-4344-a091-e72a67337199] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.015470633s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-998555 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1206 19:11:48.982633  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.0200802s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons disable ingress-dns --alsologtostderr -v=1: (11.572264732s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons disable ingress --alsologtostderr -v=1
E1206 19:12:16.672705  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons disable ingress --alsologtostderr -v=1: (7.592596853s)
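For reference, the earlier "waiting 8m0s for pods matching \"run=nginx\"" step is a poll for the PodReady condition on a label selector. A rough client-go equivalent, assuming a kubeconfig in $KUBECONFIG and the default namespace (the test helper's actual implementation lives in helpers_test.go and may differ):

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 8m0s, matching the timeout in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "run=nginx"})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal("pod never became ready: ", err)
	}
	fmt.Println("run=nginx is Ready")
}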
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-998555
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-998555:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f",
	        "Created": "2023-12-06T19:09:37.875812337Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T19:09:38.228560615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f/hosts",
	        "LogPath": "/var/lib/docker/containers/0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f/0a93bb81b37e7fbbeaa57133cbff301094544a2102d83038d9978130407ba63f-json.log",
	        "Name": "/ingress-addon-legacy-998555",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-998555:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-998555",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/876528eaeb7a53ae8d5911020bd84d693571b9c6fec65ca8b1963fefa7459285-init/diff:/var/lib/docker/overlay2/3961c608fd2e546f17711d7abfbc6ea02272979b18f6f84671d9084e2cf5bd05/diff",
	                "MergedDir": "/var/lib/docker/overlay2/876528eaeb7a53ae8d5911020bd84d693571b9c6fec65ca8b1963fefa7459285/merged",
	                "UpperDir": "/var/lib/docker/overlay2/876528eaeb7a53ae8d5911020bd84d693571b9c6fec65ca8b1963fefa7459285/diff",
	                "WorkDir": "/var/lib/docker/overlay2/876528eaeb7a53ae8d5911020bd84d693571b9c6fec65ca8b1963fefa7459285/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-998555",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-998555/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-998555",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-998555",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-998555",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b66b1c4ea42aa7e65436dd2942c055b0193933628cef4633b4f955cefc63274",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9b66b1c4ea42",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-998555": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0a93bb81b37e",
	                        "ingress-addon-legacy-998555"
	                    ],
	                    "NetworkID": "241a25e900520d314a4cab9c505b82ca6fb9a965cb99eaba55f729a9a07fbc47",
	                    "EndpointID": "f68c65daf8dff7fc052633f150dc606003dc31ea71081a03a583f735de430430",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
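The address the failed nslookup targets (192.168.49.2) is visible in the inspect output above under NetworkSettings.Networks, keyed by the profile name. A small sketch of reading it programmatically, assuming the docker CLI is on PATH; minikube itself resolves this through its driver layer rather than this way:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	name := "ingress-addon-legacy-998555" // container/profile name from the log
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatal(err)
	}
	var results []inspect // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &results); err != nil {
		log.Fatal(err)
	}
	// The kic driver attaches the container to a network named after the
	// profile; its IPAddress is what the test dials for DNS and HTTP.
	fmt.Println(results[0].NetworkSettings.Networks[name].IPAddress)
}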
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-998555 -n ingress-addon-legacy-998555
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-998555 logs -n 25: (1.063169424s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-796172 image ls                                               | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	| image   | functional-796172 image load                                             | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-796172 image ls                                               | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	| image   | functional-796172 image save --daemon                                    | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-796172                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-796172                                                        | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | image ls --format yaml                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-796172                                                        | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | image ls --format short                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-796172 ssh pgrep                                              | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC |                     |
	|         | buildkitd                                                                |                             |         |         |                     |                     |
	| image   | functional-796172                                                        | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-796172                                                        | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-796172 image build -t                                         | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	|         | localhost/my-image:functional-796172                                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                         |                             |         |         |                     |                     |
	| image   | functional-796172 image ls                                               | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	| delete  | -p functional-796172                                                     | functional-796172           | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:08 UTC |
	| start   | -p image-650179                                                          | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:08 UTC | 06 Dec 23 19:09 UTC |
	|         | --driver=docker                                                          |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:09 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | -p image-650179                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:09 UTC |
	|         | --build-opt=build-arg=ENV_A=test_env_str                                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                                       |                             |         |         |                     |                     |
	|         | image-650179                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:09 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                                  |                             |         |         |                     |                     |
	|         | image-650179                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:09 UTC |
	|         | -f inner/Dockerfile                                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                                            |                             |         |         |                     |                     |
	|         | -p image-650179                                                          |                             |         |         |                     |                     |
	| delete  | -p image-650179                                                          | image-650179                | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:09 UTC |
	| start   | -p ingress-addon-legacy-998555                                           | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:09 UTC | 06 Dec 23 19:11 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-998555                                              | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:11 UTC | 06 Dec 23 19:11 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-998555                                              | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:11 UTC | 06 Dec 23 19:11 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-998555                                              | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:11 UTC | 06 Dec 23 19:11 UTC |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-998555 ip                                           | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:11 UTC | 06 Dec 23 19:11 UTC |
	| addons  | ingress-addon-legacy-998555                                              | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:12 UTC | 06 Dec 23 19:12 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-998555                                              | ingress-addon-legacy-998555 | jenkins | v1.32.0 | 06 Dec 23 19:12 UTC | 06 Dec 23 19:12 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 19:09:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 19:09:18.618439  292014 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:09:18.618592  292014 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:09:18.618601  292014 out.go:309] Setting ErrFile to fd 2...
	I1206 19:09:18.618607  292014 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:09:18.618874  292014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 19:09:18.619298  292014 out.go:303] Setting JSON to false
	I1206 19:09:18.620335  292014 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6705,"bootTime":1701883054,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 19:09:18.620419  292014 start.go:138] virtualization:  
	I1206 19:09:18.623632  292014 out.go:177] * [ingress-addon-legacy-998555] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 19:09:18.626826  292014 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:09:18.626903  292014 notify.go:220] Checking for updates...
	I1206 19:09:18.629938  292014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:09:18.632201  292014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 19:09:18.634467  292014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 19:09:18.636752  292014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 19:09:18.639261  292014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:09:18.641685  292014 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:09:18.674885  292014 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 19:09:18.675002  292014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 19:09:18.765208  292014 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-06 19:09:18.754630521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 19:09:18.765308  292014 docker.go:295] overlay module found
	I1206 19:09:18.767760  292014 out.go:177] * Using the docker driver based on user configuration
	I1206 19:09:18.769875  292014 start.go:298] selected driver: docker
	I1206 19:09:18.769898  292014 start.go:902] validating driver "docker" against <nil>
	I1206 19:09:18.769918  292014 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:09:18.770603  292014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 19:09:18.839488  292014 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-06 19:09:18.829970754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 19:09:18.839647  292014 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 19:09:18.839885  292014 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 19:09:18.842406  292014 out.go:177] * Using Docker driver with root privileges
	I1206 19:09:18.844859  292014 cni.go:84] Creating CNI manager for ""
	I1206 19:09:18.844893  292014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1206 19:09:18.844907  292014 start_flags.go:323] config:
	{Name:ingress-addon-legacy-998555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-998555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:09:18.848000  292014 out.go:177] * Starting control plane node ingress-addon-legacy-998555 in cluster ingress-addon-legacy-998555
	I1206 19:09:18.850382  292014 cache.go:121] Beginning downloading kic base image for docker with docker
	I1206 19:09:18.852821  292014 out.go:177] * Pulling base image ...
	I1206 19:09:18.854903  292014 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1206 19:09:18.854986  292014 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1206 19:09:18.872582  292014 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1206 19:09:18.872612  292014 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1206 19:09:18.930051  292014 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1206 19:09:18.930082  292014 cache.go:56] Caching tarball of preloaded images
	I1206 19:09:18.930272  292014 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1206 19:09:18.932816  292014 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1206 19:09:18.934838  292014 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1206 19:09:19.052937  292014 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1206 19:09:30.478326  292014 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1206 19:09:30.478447  292014 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1206 19:09:31.593636  292014 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1206 19:09:31.594026  292014 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/config.json ...
	I1206 19:09:31.594063  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/config.json: {Name:mk92f9f35532aac7a2e1713a3270671f7d458d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:31.594260  292014 cache.go:194] Successfully downloaded all kic artifacts
	I1206 19:09:31.594308  292014 start.go:365] acquiring machines lock for ingress-addon-legacy-998555: {Name:mk3e14e42bb696c92df12415e3ba441a420173dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 19:09:31.594372  292014 start.go:369] acquired machines lock for "ingress-addon-legacy-998555" in 48.73µs
	I1206 19:09:31.594395  292014 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-998555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-998555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 19:09:31.594467  292014 start.go:125] createHost starting for "" (driver="docker")
	I1206 19:09:31.598026  292014 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 19:09:31.598279  292014 start.go:159] libmachine.API.Create for "ingress-addon-legacy-998555" (driver="docker")
	I1206 19:09:31.598343  292014 client.go:168] LocalClient.Create starting
	I1206 19:09:31.598425  292014 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem
	I1206 19:09:31.598465  292014 main.go:141] libmachine: Decoding PEM data...
	I1206 19:09:31.598484  292014 main.go:141] libmachine: Parsing certificate...
	I1206 19:09:31.598537  292014 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem
	I1206 19:09:31.598560  292014 main.go:141] libmachine: Decoding PEM data...
	I1206 19:09:31.598576  292014 main.go:141] libmachine: Parsing certificate...
	I1206 19:09:31.598938  292014 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-998555 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 19:09:31.615574  292014 cli_runner.go:211] docker network inspect ingress-addon-legacy-998555 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 19:09:31.615650  292014 network_create.go:281] running [docker network inspect ingress-addon-legacy-998555] to gather additional debugging logs...
	I1206 19:09:31.615672  292014 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-998555
	W1206 19:09:31.632030  292014 cli_runner.go:211] docker network inspect ingress-addon-legacy-998555 returned with exit code 1
	I1206 19:09:31.632070  292014 network_create.go:284] error running [docker network inspect ingress-addon-legacy-998555]: docker network inspect ingress-addon-legacy-998555: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-998555 not found
	I1206 19:09:31.632086  292014 network_create.go:286] output of [docker network inspect ingress-addon-legacy-998555]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-998555 not found
	
	** /stderr **
	I1206 19:09:31.632203  292014 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 19:09:31.649491  292014 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000526160}
	I1206 19:09:31.649527  292014 network_create.go:124] attempt to create docker network ingress-addon-legacy-998555 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 19:09:31.649582  292014 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-998555 ingress-addon-legacy-998555
	I1206 19:09:31.731020  292014 network_create.go:108] docker network ingress-addon-legacy-998555 192.168.49.0/24 created
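The --format template used by the docker network inspect calls above emits a small JSON document. Decoding it in Go looks roughly like this (field names come from the template itself; note the template leaves a trailing comma inside ContainerIPs that a strict parser rejects, so real code has to strip it first):

package main

import (
	"encoding/json"
	"fmt"
)

// networkInfo mirrors the JSON shape produced by the inspect template above.
type networkInfo struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	// Values as the network created at 19:09:31 would report them,
	// with the trailing comma already stripped.
	raw := `{"Name":"ingress-addon-legacy-998555","Driver":"bridge","Subnet":"192.168.49.0/24","Gateway":"192.168.49.1","MTU":1500,"ContainerIPs":[]}`
	var n networkInfo
	if err := json.Unmarshal([]byte(raw), &n); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", n)
}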
	I1206 19:09:31.731048  292014 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-998555" container
	I1206 19:09:31.731128  292014 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 19:09:31.748835  292014 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-998555 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-998555 --label created_by.minikube.sigs.k8s.io=true
	I1206 19:09:31.768741  292014 oci.go:103] Successfully created a docker volume ingress-addon-legacy-998555
	I1206 19:09:31.768828  292014 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-998555-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-998555 --entrypoint /usr/bin/test -v ingress-addon-legacy-998555:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1206 19:09:33.170281  292014 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-998555-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-998555 --entrypoint /usr/bin/test -v ingress-addon-legacy-998555:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (1.401398923s)
	I1206 19:09:33.170313  292014 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-998555
	I1206 19:09:33.170333  292014 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1206 19:09:33.170355  292014 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 19:09:33.170447  292014 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-998555:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 19:09:37.794249  292014 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-998555:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.623750326s)
	I1206 19:09:37.794282  292014 kic.go:203] duration metric: took 4.623925 seconds to extract preloaded images to volume
	W1206 19:09:37.794416  292014 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 19:09:37.794520  292014 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 19:09:37.860255  292014 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-998555 --name ingress-addon-legacy-998555 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-998555 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-998555 --network ingress-addon-legacy-998555 --ip 192.168.49.2 --volume ingress-addon-legacy-998555:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1206 19:09:38.237791  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Running}}
	I1206 19:09:38.275834  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:09:38.303833  292014 cli_runner.go:164] Run: docker exec ingress-addon-legacy-998555 stat /var/lib/dpkg/alternatives/iptables
	I1206 19:09:38.377335  292014 oci.go:144] the created container "ingress-addon-legacy-998555" has a running status.
	I1206 19:09:38.377361  292014 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa...
	I1206 19:09:38.516888  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 19:09:38.516981  292014 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 19:09:38.538544  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:09:38.567695  292014 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 19:09:38.567715  292014 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-998555 chown docker:docker /home/docker/.ssh/authorized_keys]
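The id_rsa / authorized_keys pair created above (the 381-byte file is the marshalled public key) can be generated with the standard library plus golang.org/x/crypto/ssh. A sketch under that assumption, not minikube's actual implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair, then write the PEM private key and the
	// authorized_keys line that gets copied into the container.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("authorized_keys", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}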
	I1206 19:09:38.638783  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:09:38.665748  292014 machine.go:88] provisioning docker machine ...
	I1206 19:09:38.665776  292014 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-998555"
	I1206 19:09:38.665841  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:38.690051  292014 main.go:141] libmachine: Using SSH client type: native
	I1206 19:09:38.690493  292014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 19:09:38.690508  292014 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-998555 && echo "ingress-addon-legacy-998555" | sudo tee /etc/hostname
	I1206 19:09:38.691148  292014 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50156->127.0.0.1:33093: read: connection reset by peer
	I1206 19:09:41.855079  292014 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-998555
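The handshake failure at 19:09:38.691148 is expected this early: sshd in the just-started container is not listening yet, and the client retries until the hostname command succeeds about three seconds later. An illustrative dial-with-retry sketch (attempt count and backoff are assumptions, not minikube's values):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps redialing with a linear backoff until the TCP
// endpoint (the container's published SSH port) accepts a connection.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return c, nil
		}
		time.Sleep(time.Duration(i+1) * time.Second)
	}
	return nil, fmt.Errorf("dial %s: %w", addr, err)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33093", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected")
}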
	
	I1206 19:09:41.855167  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:41.874065  292014 main.go:141] libmachine: Using SSH client type: native
	I1206 19:09:41.874481  292014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 19:09:41.874507  292014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-998555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-998555/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-998555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 19:09:42.031876  292014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 19:09:42.031909  292014 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17740-239434/.minikube CaCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17740-239434/.minikube}
	I1206 19:09:42.031941  292014 ubuntu.go:177] setting up certificates
	I1206 19:09:42.031956  292014 provision.go:83] configureAuth start
	I1206 19:09:42.032029  292014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-998555
	I1206 19:09:42.051658  292014 provision.go:138] copyHostCerts
	I1206 19:09:42.051716  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17740-239434/.minikube/ca.pem
	I1206 19:09:42.051751  292014 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-239434/.minikube/ca.pem, removing ...
	I1206 19:09:42.051762  292014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.pem
	I1206 19:09:42.051841  292014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/ca.pem (1078 bytes)
	I1206 19:09:42.051925  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17740-239434/.minikube/cert.pem
	I1206 19:09:42.051948  292014 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-239434/.minikube/cert.pem, removing ...
	I1206 19:09:42.051958  292014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-239434/.minikube/cert.pem
	I1206 19:09:42.051987  292014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/cert.pem (1123 bytes)
	I1206 19:09:42.052033  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17740-239434/.minikube/key.pem
	I1206 19:09:42.052052  292014 exec_runner.go:144] found /home/jenkins/minikube-integration/17740-239434/.minikube/key.pem, removing ...
	I1206 19:09:42.052059  292014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17740-239434/.minikube/key.pem
	I1206 19:09:42.052083  292014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17740-239434/.minikube/key.pem (1679 bytes)
	I1206 19:09:42.052137  292014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-998555 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-998555]
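The server cert above is signed by the minikube CA with the SAN list printed in the log line. A self-signed stand-in that reproduces the same SAN set (illustrative only; the real flow signs with ca-key.pem rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-998555"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-998555"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", out, 0644); err != nil {
		panic(err)
	}
}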
	I1206 19:09:42.656347  292014 provision.go:172] copyRemoteCerts
	I1206 19:09:42.656436  292014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 19:09:42.656485  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:42.674317  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:09:42.778883  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 19:09:42.778943  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 19:09:42.807648  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 19:09:42.807708  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1206 19:09:42.836961  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 19:09:42.837022  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 19:09:42.865222  292014 provision.go:86] duration metric: configureAuth took 833.248013ms
	I1206 19:09:42.865249  292014 ubuntu.go:193] setting minikube options for container-runtime
	I1206 19:09:42.865450  292014 config.go:182] Loaded profile config "ingress-addon-legacy-998555": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1206 19:09:42.865510  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:42.887920  292014 main.go:141] libmachine: Using SSH client type: native
	I1206 19:09:42.888424  292014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 19:09:42.888443  292014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1206 19:09:43.038321  292014 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1206 19:09:43.038346  292014 ubuntu.go:71] root file system type: overlay
	I1206 19:09:43.038466  292014 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1206 19:09:43.038542  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:43.056820  292014 main.go:141] libmachine: Using SSH client type: native
	I1206 19:09:43.057264  292014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 19:09:43.057340  292014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1206 19:09:43.221123  292014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1206 19:09:43.221222  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:43.238700  292014 main.go:141] libmachine: Using SSH client type: native
	I1206 19:09:43.239132  292014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1206 19:09:43.239157  292014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1206 19:09:44.083063  292014 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-06 19:09:43.213788608 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
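The update above is deliberately idempotent: diff the freshly rendered unit against the installed one, and only move it into place and restart docker when they differ (here they do, since the stock unit is being replaced). The rendering side amounts to filling a text template with the TLS paths; a trimmed sketch of that idea (the template below is an assumption, not minikube's actual one):

package main

import (
	"os"
	"text/template"
)

// Render just the ExecStart override seen in the unit above from a template.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"CACert":     "/etc/docker/ca.pem",
		"ServerCert": "/etc/docker/server.pem",
		"ServerKey":  "/etc/docker/server-key.pem",
	})
	if err != nil {
		panic(err)
	}
}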
	
	I1206 19:09:44.083110  292014 machine.go:91] provisioned docker machine in 5.417344517s
	I1206 19:09:44.083126  292014 client.go:171] LocalClient.Create took 12.484768174s
	I1206 19:09:44.083145  292014 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-998555" took 12.48486601s
	I1206 19:09:44.083159  292014 start.go:300] post-start starting for "ingress-addon-legacy-998555" (driver="docker")
	I1206 19:09:44.083169  292014 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 19:09:44.083261  292014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 19:09:44.083318  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:44.106886  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:09:44.215567  292014 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 19:09:44.220001  292014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 19:09:44.220042  292014 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 19:09:44.220054  292014 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 19:09:44.220063  292014 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 19:09:44.220074  292014 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-239434/.minikube/addons for local assets ...
	I1206 19:09:44.220139  292014 filesync.go:126] Scanning /home/jenkins/minikube-integration/17740-239434/.minikube/files for local assets ...
	I1206 19:09:44.220227  292014 filesync.go:149] local asset: /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem -> 2448142.pem in /etc/ssl/certs
	I1206 19:09:44.220240  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem -> /etc/ssl/certs/2448142.pem
	I1206 19:09:44.220365  292014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 19:09:44.231462  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem --> /etc/ssl/certs/2448142.pem (1708 bytes)
	I1206 19:09:44.261263  292014 start.go:303] post-start completed in 178.088519ms
	I1206 19:09:44.261643  292014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-998555
	I1206 19:09:44.279781  292014 profile.go:148] Saving config to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/config.json ...
	I1206 19:09:44.280062  292014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 19:09:44.280116  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:44.298204  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:09:44.398657  292014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 19:09:44.404377  292014 start.go:128] duration metric: createHost completed in 12.809895953s
	I1206 19:09:44.404405  292014 start.go:83] releasing machines lock for "ingress-addon-legacy-998555", held for 12.810021046s
	I1206 19:09:44.404477  292014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-998555
	I1206 19:09:44.422157  292014 ssh_runner.go:195] Run: cat /version.json
	I1206 19:09:44.422223  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:44.422480  292014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 19:09:44.422546  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:09:44.443156  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:09:44.443821  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:09:44.677350  292014 ssh_runner.go:195] Run: systemctl --version
	I1206 19:09:44.683598  292014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 19:09:44.689208  292014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1206 19:09:44.719105  292014 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1206 19:09:44.719185  292014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1206 19:09:44.739040  292014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1206 19:09:44.759232  292014 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
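The find/sed pipeline above normalizes the CNI configs in place: it gives the loopback config a name, pins cniVersion to 1.0.0, and rewrites bridge/podman subnets to 10.244.0.0/16. The cniVersion pin expressed as a small Go program instead of sed (the path is illustrative):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Rewrite the cniVersion field of a CNI config file in place,
	// matching what the sed expression above does.
	path := "/etc/cni/net.d/200-loopback.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`"cniVersion": ".*"`)
	out := re.ReplaceAll(data, []byte(`"cniVersion": "1.0.0"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}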
	I1206 19:09:44.759264  292014 start.go:475] detecting cgroup driver to use...
	I1206 19:09:44.759296  292014 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 19:09:44.759434  292014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:09:44.779580  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1206 19:09:44.791265  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 19:09:44.803381  292014 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 19:09:44.803465  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 19:09:44.815694  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 19:09:44.827578  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 19:09:44.839630  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 19:09:44.851626  292014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 19:09:44.863937  292014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 19:09:44.877693  292014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 19:09:44.888056  292014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 19:09:44.898186  292014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:09:44.987258  292014 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 19:09:45.165417  292014 start.go:475] detecting cgroup driver to use...
	I1206 19:09:45.165502  292014 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 19:09:45.165609  292014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1206 19:09:45.192106  292014 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1206 19:09:45.192226  292014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 19:09:45.214404  292014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 19:09:45.244728  292014 ssh_runner.go:195] Run: which cri-dockerd
	I1206 19:09:45.250844  292014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1206 19:09:45.267655  292014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1206 19:09:45.306806  292014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1206 19:09:45.431701  292014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1206 19:09:45.547777  292014 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1206 19:09:45.547964  292014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
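The 130-byte daemon.json payload itself is not printed. A typical shape consistent with the "configuring docker to use cgroupfs" message would be the following (an assumption about the content, not the actual bytes):

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}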
	I1206 19:09:45.574142  292014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:09:45.676791  292014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1206 19:09:45.952582  292014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 19:09:45.979815  292014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 19:09:46.012051  292014 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1206 19:09:46.012166  292014 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-998555 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 19:09:46.031137  292014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 19:09:46.036259  292014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 19:09:46.050814  292014 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1206 19:09:46.050895  292014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 19:09:46.072532  292014 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1206 19:09:46.072637  292014 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1206 19:09:46.072796  292014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1206 19:09:46.085708  292014 ssh_runner.go:195] Run: which lz4
	I1206 19:09:46.090525  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1206 19:09:46.090633  292014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 19:09:46.095334  292014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 19:09:46.095373  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
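The failed stat above is the desired outcome of an existence probe: only when the file is missing (exit status 1) does the ~459 MB copy happen. The same check-then-copy pattern with plain ssh/scp commands (host, port handling, and paths are illustrative; minikube drives this through its own SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

// copyIfMissing stats the remote path first and skips the transfer when the
// file already exists.
func copyIfMissing(host, local, remote string) error {
	if err := exec.Command("ssh", host, "stat", remote).Run(); err == nil {
		return nil // already present, nothing to copy
	}
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	err := copyIfMissing("docker@127.0.0.1",
		"preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4",
		"/preloaded.tar.lz4")
	fmt.Println(err)
}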
	I1206 19:09:48.214477  292014 docker.go:635] Took 2.123881 seconds to copy over tarball
	I1206 19:09:48.214551  292014 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 19:09:50.672774  292014 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.458154186s)
	I1206 19:09:50.672798  292014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 19:09:50.747339  292014 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1206 19:09:50.758821  292014 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1206 19:09:50.780890  292014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 19:09:50.881247  292014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1206 19:09:53.547128  292014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.665835979s)
	I1206 19:09:53.547214  292014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 19:09:53.569387  292014 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1206 19:09:53.569411  292014 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1206 19:09:53.569422  292014 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 19:09:53.570834  292014 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 19:09:53.571036  292014 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1206 19:09:53.571212  292014 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1206 19:09:53.571283  292014 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 19:09:53.571337  292014 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 19:09:53.571570  292014 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 19:09:53.571639  292014 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:09:53.571695  292014 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1206 19:09:53.571797  292014 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1206 19:09:53.572492  292014 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 19:09:53.572669  292014 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 19:09:53.572864  292014 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1206 19:09:53.573525  292014 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1206 19:09:53.573623  292014 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 19:09:53.573810  292014 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 19:09:53.574274  292014 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 19:09:53.916601  292014 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.916790  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1206 19:09:53.942487  292014 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1206 19:09:53.942583  292014 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 19:09:53.942682  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1206 19:09:53.955328  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1206 19:09:53.963290  292014 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.963591  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1206 19:09:53.970528  292014 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.970842  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1206 19:09:53.976001  292014 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.976263  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1206 19:09:53.979596  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W1206 19:09:53.984196  292014 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.984457  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1206 19:09:53.989874  292014 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:53.990255  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1206 19:09:53.998640  292014 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1206 19:09:53.998737  292014 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1206 19:09:53.998825  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1206 19:09:54.001251  292014 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1206 19:09:54.001354  292014 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1206 19:09:54.001446  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1206 19:09:54.048886  292014 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1206 19:09:54.049008  292014 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1206 19:09:54.049093  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1206 19:09:54.049437  292014 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1206 19:09:54.049512  292014 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 19:09:54.049589  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1206 19:09:54.070148  292014 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1206 19:09:54.070243  292014 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 19:09:54.070328  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 19:09:54.092686  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1206 19:09:54.092886  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1206 19:09:54.093173  292014 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1206 19:09:54.093244  292014 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 19:09:54.093323  292014 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1206 19:09:54.111491  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1206 19:09:54.111589  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1206 19:09:54.128573  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1206 19:09:54.133888  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W1206 19:09:54.210653  292014 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1206 19:09:54.210854  292014 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:09:54.231613  292014 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1206 19:09:54.231673  292014 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:09:54.231727  292014 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:09:54.265360  292014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 19:09:54.265436  292014 cache_images.go:92] LoadImages completed in 696.002395ms
	W1206 19:09:54.265512  292014 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17740-239434/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
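Each "needs transfer" decision above comes down to comparing the image ID reported by the container runtime against the digest expected for arm64; since the preloaded images carry k8s.gcr.io names, every registry.k8s.io-named lookup misses. Roughly, in Go (hypothetical helper, not minikube's actual function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's image ID differs from the
// expected digest, i.e. whether the cached image must be loaded.
func needsTransfer(image, wantHex string) (bool, error) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // image not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantHex, nil
}

func main() {
	transfer, err := needsTransfer("registry.k8s.io/pause:3.2",
		"2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c")
	fmt.Println("needs transfer:", transfer, err)
}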
	I1206 19:09:54.265569  292014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1206 19:09:54.333889  292014 cni.go:84] Creating CNI manager for ""
	I1206 19:09:54.333922  292014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1206 19:09:54.333952  292014 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 19:09:54.333975  292014 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-998555 NodeName:ingress-addon-legacy-998555 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 19:09:54.334137  292014 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-998555"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
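
One quirk worth knowing about the generated YAML above: the evictionHard values are literal percentages ("0%"), and a bare % is hazardous whenever the config is echoed through a printf-style formatter, which parses % as the start of a verb and renders an unmatched one as %!"(MISSING). A minimal reproduction in Go:

    package main

    import "fmt"

    func main() {
    	// The trailing % is parsed as a formatting verb; with no operand,
    	// fmt renders it as %!"(MISSING) instead of a literal percent sign.
    	fmt.Printf("nodefs.available: \"0%\"\n")
    	// Output: nodefs.available: "0%!"(MISSING)
    }
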
	
	I1206 19:09:54.334206  292014 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-998555 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-998555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 19:09:54.334277  292014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1206 19:09:54.345506  292014 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 19:09:54.345595  292014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 19:09:54.356932  292014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1206 19:09:54.379422  292014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1206 19:09:54.401223  292014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1206 19:09:54.423236  292014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 19:09:54.427989  292014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
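
The bash one-liner above makes the /etc/hosts pin idempotent: any stale control-plane.minikube.internal line is filtered out before the fresh mapping is appended and the temp file is copied back into place. A rough local Go equivalent of the same rewrite (illustrative only; minikube runs the bash version over SSH, and this needs root just like the sudo cp):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const entry = "192.168.49.2\t" + host

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping for the control-plane name.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
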
	I1206 19:09:54.441899  292014 certs.go:56] Setting up /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555 for IP: 192.168.49.2
	I1206 19:09:54.441932  292014 certs.go:190] acquiring lock for shared ca certs: {Name:mk1262bee946068d8c620546d5b1b1b1aa594d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:54.442068  292014 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key
	I1206 19:09:54.442141  292014 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key
	I1206 19:09:54.442209  292014 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key
	I1206 19:09:54.442225  292014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt with IP's: []
	I1206 19:09:54.721843  292014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt ...
	I1206 19:09:54.721874  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: {Name:mkef19e3b8ccefadfbf56f1da1157d84b697d554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:54.722068  292014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key ...
	I1206 19:09:54.722084  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key: {Name:mk9c5e6592d02290d3d976fadf3a54a730f9dba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:54.722183  292014 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key.dd3b5fb2
	I1206 19:09:54.722205  292014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 19:09:55.131334  292014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt.dd3b5fb2 ...
	I1206 19:09:55.131370  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt.dd3b5fb2: {Name:mkd4a98d69744d317cf9b44df950b68e89436ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:55.131564  292014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key.dd3b5fb2 ...
	I1206 19:09:55.131579  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key.dd3b5fb2: {Name:mkfdfdf4ae1de9a236654aa7736b8e5b70fb3352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:55.131672  292014 certs.go:337] copying /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt
	I1206 19:09:55.131757  292014 certs.go:341] copying /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key
	I1206 19:09:55.131820  292014 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.key
	I1206 19:09:55.131841  292014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.crt with IP's: []
	I1206 19:09:55.743565  292014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.crt ...
	I1206 19:09:55.743598  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.crt: {Name:mk2b381572f5a6c8d906abeda79c377525ed1fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:09:55.743787  292014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.key ...
	I1206 19:09:55.743803  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.key: {Name:mk931c2879604ddb28c3e597879b51f843c68957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
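
Each of the certs generated above follows the same shape in Go's standard crypto stack: generate a key, fill in an x509.Certificate template with the desired SANs, sign, and PEM-encode. A self-contained sketch using the same IP SANs the log shows for the apiserver cert (self-signed here for brevity, whereas minikube signs with its minikubeCA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		// The same IP SANs the log shows for the apiserver cert.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.49.2"),
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Template doubles as parent, so the cert is self-signed.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
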
	I1206 19:09:55.743883  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 19:09:55.743905  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 19:09:55.743919  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 19:09:55.743935  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 19:09:55.743947  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 19:09:55.743965  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 19:09:55.743979  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 19:09:55.743998  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 19:09:55.744063  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/244814.pem (1338 bytes)
	W1206 19:09:55.744105  292014 certs.go:433] ignoring /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/244814_empty.pem, impossibly tiny 0 bytes
	I1206 19:09:55.744119  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 19:09:55.744154  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/ca.pem (1078 bytes)
	I1206 19:09:55.744187  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/cert.pem (1123 bytes)
	I1206 19:09:55.744218  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/home/jenkins/minikube-integration/17740-239434/.minikube/certs/key.pem (1679 bytes)
	I1206 19:09:55.744288  292014 certs.go:437] found cert: /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem (1708 bytes)
	I1206 19:09:55.744325  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/certs/244814.pem -> /usr/share/ca-certificates/244814.pem
	I1206 19:09:55.744343  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem -> /usr/share/ca-certificates/2448142.pem
	I1206 19:09:55.744359  292014 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:09:55.744963  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 19:09:55.775531  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 19:09:55.805959  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 19:09:55.834835  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 19:09:55.864436  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 19:09:55.893397  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 19:09:55.922158  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 19:09:55.951129  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 19:09:55.981106  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/certs/244814.pem --> /usr/share/ca-certificates/244814.pem (1338 bytes)
	I1206 19:09:56.015962  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/ssl/certs/2448142.pem --> /usr/share/ca-certificates/2448142.pem (1708 bytes)
	I1206 19:09:56.050596  292014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 19:09:56.081634  292014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 19:09:56.104697  292014 ssh_runner.go:195] Run: openssl version
	I1206 19:09:56.112091  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 19:09:56.123943  292014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:09:56.128906  292014 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:09:56.129022  292014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 19:09:56.137757  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 19:09:56.149560  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/244814.pem && ln -fs /usr/share/ca-certificates/244814.pem /etc/ssl/certs/244814.pem"
	I1206 19:09:56.161378  292014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/244814.pem
	I1206 19:09:56.165866  292014 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 19:04 /usr/share/ca-certificates/244814.pem
	I1206 19:09:56.165932  292014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/244814.pem
	I1206 19:09:56.174669  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/244814.pem /etc/ssl/certs/51391683.0"
	I1206 19:09:56.186372  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2448142.pem && ln -fs /usr/share/ca-certificates/2448142.pem /etc/ssl/certs/2448142.pem"
	I1206 19:09:56.198044  292014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2448142.pem
	I1206 19:09:56.202699  292014 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 19:04 /usr/share/ca-certificates/2448142.pem
	I1206 19:09:56.202778  292014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2448142.pem
	I1206 19:09:56.211727  292014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2448142.pem /etc/ssl/certs/3ec20f2e.0"
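
Lines like "test -L /etc/ssl/certs/b5213941.0 || ln -fs ..." install each CA under its OpenSSL subject-hash name so that TLS libraries can locate it by directory lookup. A sketch of deriving that link name, shelling out to the same openssl invocation the log runs (paths illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same command as the log: openssl x509 -hash -noout -in <pem>.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// ln -fs equivalent: replace any existing link, then symlink.
    	os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    }
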
	I1206 19:09:56.223781  292014 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 19:09:56.228349  292014 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 19:09:56.228439  292014 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-998555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-998555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:09:56.228578  292014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1206 19:09:56.248006  292014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 19:09:56.258911  292014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 19:09:56.269907  292014 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1206 19:09:56.269983  292014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 19:09:56.280738  292014 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 19:09:56.280797  292014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 19:09:56.336300  292014 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1206 19:09:56.339578  292014 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 19:09:56.561473  292014 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1206 19:09:56.561550  292014 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1206 19:09:56.561604  292014 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I1206 19:09:56.561641  292014 kubeadm.go:322] OS: Linux
	I1206 19:09:56.561695  292014 kubeadm.go:322] CGROUPS_CPU: enabled
	I1206 19:09:56.561750  292014 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1206 19:09:56.561807  292014 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1206 19:09:56.561863  292014 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1206 19:09:56.561916  292014 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1206 19:09:56.561970  292014 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1206 19:09:56.651082  292014 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 19:09:56.651213  292014 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 19:09:56.651320  292014 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 19:09:56.858328  292014 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 19:09:56.859661  292014 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 19:09:56.859988  292014 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 19:09:56.960689  292014 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 19:09:56.964639  292014 out.go:204]   - Generating certificates and keys ...
	I1206 19:09:56.964740  292014 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 19:09:56.964808  292014 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 19:09:57.907922  292014 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 19:09:58.546056  292014 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 19:09:59.093716  292014 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 19:09:59.797288  292014 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 19:10:00.779318  292014 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 19:10:00.779509  292014 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-998555 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 19:10:01.463066  292014 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 19:10:01.463427  292014 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-998555 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 19:10:02.043079  292014 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 19:10:02.385461  292014 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 19:10:03.168349  292014 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 19:10:03.168670  292014 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 19:10:03.529435  292014 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 19:10:04.441983  292014 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 19:10:05.473727  292014 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 19:10:05.683196  292014 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 19:10:05.684335  292014 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 19:10:05.686990  292014 out.go:204]   - Booting up control plane ...
	I1206 19:10:05.687087  292014 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 19:10:05.692379  292014 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 19:10:05.698856  292014 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 19:10:05.701048  292014 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 19:10:05.719620  292014 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 19:10:18.222203  292014 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.503060 seconds
	I1206 19:10:18.222319  292014 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 19:10:18.237634  292014 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 19:10:18.760457  292014 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 19:10:18.760602  292014 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-998555 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 19:10:19.268780  292014 kubeadm.go:322] [bootstrap-token] Using token: nkyf8u.eu2jh0o1wqmjbfhg
	I1206 19:10:19.271279  292014 out.go:204]   - Configuring RBAC rules ...
	I1206 19:10:19.271423  292014 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 19:10:19.276189  292014 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 19:10:19.285895  292014 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 19:10:19.289860  292014 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 19:10:19.295983  292014 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 19:10:19.299279  292014 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 19:10:19.309731  292014 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 19:10:19.602705  292014 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 19:10:19.705897  292014 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 19:10:19.707122  292014 kubeadm.go:322] 
	I1206 19:10:19.707194  292014 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 19:10:19.707209  292014 kubeadm.go:322] 
	I1206 19:10:19.707281  292014 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 19:10:19.707291  292014 kubeadm.go:322] 
	I1206 19:10:19.707315  292014 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 19:10:19.707385  292014 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 19:10:19.707446  292014 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 19:10:19.707455  292014 kubeadm.go:322] 
	I1206 19:10:19.707504  292014 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 19:10:19.707577  292014 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 19:10:19.707647  292014 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 19:10:19.707656  292014 kubeadm.go:322] 
	I1206 19:10:19.707743  292014 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 19:10:19.707818  292014 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 19:10:19.707830  292014 kubeadm.go:322] 
	I1206 19:10:19.707909  292014 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nkyf8u.eu2jh0o1wqmjbfhg \
	I1206 19:10:19.708011  292014 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:372e7bdfa31dcfc44eafd3161d124bebbb6f7a71daed6ab3c52f0521e99d1a38 \
	I1206 19:10:19.708035  292014 kubeadm.go:322]     --control-plane 
	I1206 19:10:19.708043  292014 kubeadm.go:322] 
	I1206 19:10:19.708122  292014 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 19:10:19.708130  292014 kubeadm.go:322] 
	I1206 19:10:19.708207  292014 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nkyf8u.eu2jh0o1wqmjbfhg \
	I1206 19:10:19.708397  292014 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:372e7bdfa31dcfc44eafd3161d124bebbb6f7a71daed6ab3c52f0521e99d1a38 
	I1206 19:10:19.712260  292014 kubeadm.go:322] W1206 19:09:56.335568    1659 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1206 19:10:19.712507  292014 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1206 19:10:19.712652  292014 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1206 19:10:19.712873  292014 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1206 19:10:19.712984  292014 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 19:10:19.713118  292014 kubeadm.go:322] W1206 19:10:05.696513    1659 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 19:10:19.713258  292014 kubeadm.go:322] W1206 19:10:05.699233    1659 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 19:10:19.713271  292014 cni.go:84] Creating CNI manager for ""
	I1206 19:10:19.713285  292014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1206 19:10:19.713313  292014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 19:10:19.713446  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:19.713524  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503 minikube.k8s.io/name=ingress-addon-legacy-998555 minikube.k8s.io/updated_at=2023_12_06T19_10_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:20.385297  292014 ops.go:34] apiserver oom_adj: -16
	I1206 19:10:20.385445  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:20.486086  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:21.085366  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:21.585187  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:22.084669  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:22.584723  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:23.084849  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:23.584709  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:24.085044  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:24.584718  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:25.085626  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:25.585289  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:26.085338  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:26.585683  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:27.084681  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:27.585568  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:28.085324  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:28.585245  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:29.084631  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:29.585622  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:30.085542  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:30.585228  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:31.084726  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:31.584702  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:32.085614  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:32.584696  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:33.084759  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:33.585369  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:34.085366  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:34.584749  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:35.085306  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:35.585336  292014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 19:10:35.684970  292014 kubeadm.go:1088] duration metric: took 15.971572944s to wait for elevateKubeSystemPrivileges.
	I1206 19:10:35.685012  292014 kubeadm.go:406] StartCluster complete in 39.456612556s
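
The burst of "kubectl get sa default" runs above is a plain fixed-interval poll: the default service account appearing is the readiness signal before kube-system privileges are granted, and the log shows one attempt roughly every 500ms for about 16s. A generic sketch of such a poll (the command and interval mirror the log; the pollUntil helper is made up for this sketch):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil runs cmd every interval until it succeeds or timeout elapses.
    func pollUntil(interval, timeout time.Duration, cmd func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := cmd(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(500*time.Millisecond, 3*time.Minute, func() error {
    		// Non-zero exit (account not yet created) surfaces as an error.
    		return exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
    	})
    	fmt.Println("default service account ready:", err == nil)
    }
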
	I1206 19:10:35.685030  292014 settings.go:142] acquiring lock: {Name:mk0fc622b23c24037d6b3f8b7cae60bf03ba98b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:10:35.685103  292014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 19:10:35.685829  292014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17740-239434/kubeconfig: {Name:mk2dc9f3d2c10f91cb0e51e097b71483e7cf911f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 19:10:35.686560  292014 kapi.go:59] client config for ingress-addon-legacy-998555: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key", CAFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:10:35.687999  292014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 19:10:35.688341  292014 config.go:182] Loaded profile config "ingress-addon-legacy-998555": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1206 19:10:35.688406  292014 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 19:10:35.688475  292014 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-998555"
	I1206 19:10:35.688490  292014 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-998555"
	I1206 19:10:35.688530  292014 host.go:66] Checking if "ingress-addon-legacy-998555" exists ...
	I1206 19:10:35.688740  292014 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 19:10:35.688777  292014 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-998555"
	I1206 19:10:35.688791  292014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-998555"
	I1206 19:10:35.688983  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:10:35.689074  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:10:35.726556  292014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 19:10:35.729145  292014 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:10:35.729176  292014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 19:10:35.729246  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:10:35.725637  292014 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-998555" context rescaled to 1 replicas
	I1206 19:10:35.729481  292014 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 19:10:35.732422  292014 out.go:177] * Verifying Kubernetes components...
	I1206 19:10:35.734998  292014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:10:35.744140  292014 kapi.go:59] client config for ingress-addon-legacy-998555: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key", CAFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:10:35.746572  292014 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-998555"
	I1206 19:10:35.746625  292014 host.go:66] Checking if "ingress-addon-legacy-998555" exists ...
	I1206 19:10:35.747130  292014 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-998555 --format={{.State.Status}}
	I1206 19:10:35.786071  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:10:35.787703  292014 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 19:10:35.787722  292014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 19:10:35.787785  292014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-998555
	I1206 19:10:35.815371  292014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/ingress-addon-legacy-998555/id_rsa Username:docker}
	I1206 19:10:35.986330  292014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 19:10:36.004695  292014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 19:10:36.029948  292014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 19:10:36.030677  292014 kapi.go:59] client config for ingress-addon-legacy-998555: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt", KeyFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.key", CAFile:"/home/jenkins/minikube-integration/17740-239434/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 19:10:36.030996  292014 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-998555" to be "Ready" ...
	I1206 19:10:36.040157  292014 node_ready.go:49] node "ingress-addon-legacy-998555" has status "Ready":"True"
	I1206 19:10:36.040184  292014 node_ready.go:38] duration metric: took 9.163987ms waiting for node "ingress-addon-legacy-998555" to be "Ready" ...
	I1206 19:10:36.040196  292014 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 19:10:36.061034  292014 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace to be "Ready" ...
	I1206 19:10:36.855231  292014 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 19:10:36.863837  292014 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 19:10:36.866010  292014 addons.go:502] enable addons completed in 1.177616066s: enabled=[storage-provisioner default-storageclass]
	I1206 19:10:38.090035  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:40.091031  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:42.094409  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:44.590471  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:47.089730  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:49.589466  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:51.590139  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:53.590408  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:56.093881  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:10:58.589941  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:01.089829  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:03.091203  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:05.589388  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:07.589766  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:10.090192  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:12.589997  292014 pod_ready.go:102] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"False"
	I1206 19:11:13.090981  292014 pod_ready.go:92] pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.091002  292014 pod_ready.go:81] duration metric: took 37.02993491s waiting for pod "coredns-66bff467f8-j6rd4" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.091014  292014 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.096469  292014 pod_ready.go:92] pod "etcd-ingress-addon-legacy-998555" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.096498  292014 pod_ready.go:81] duration metric: took 5.476136ms waiting for pod "etcd-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.096514  292014 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.102340  292014 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-998555" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.102363  292014 pod_ready.go:81] duration metric: took 5.841398ms waiting for pod "kube-apiserver-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.102377  292014 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.107423  292014 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-998555" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.107452  292014 pod_ready.go:81] duration metric: took 5.06627ms waiting for pod "kube-controller-manager-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.107466  292014 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-46f87" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.112785  292014 pod_ready.go:92] pod "kube-proxy-46f87" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.112815  292014 pod_ready.go:81] duration metric: took 5.337102ms waiting for pod "kube-proxy-46f87" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.112827  292014 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.285268  292014 request.go:629] Waited for 172.32874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-998555
	I1206 19:11:13.485530  292014 request.go:629] Waited for 197.234923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-998555
	I1206 19:11:13.488414  292014 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-998555" in "kube-system" namespace has status "Ready":"True"
	I1206 19:11:13.488441  292014 pod_ready.go:81] duration metric: took 375.604886ms waiting for pod "kube-scheduler-ingress-addon-legacy-998555" in "kube-system" namespace to be "Ready" ...
	I1206 19:11:13.488452  292014 pod_ready.go:38] duration metric: took 37.448243408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
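
The pod_ready waits above boil down to polling each pod's PodReady condition until it reports True. A hedged sketch of the same check with client-go, in the spirit of the pod_ready.go helpers (kubeconfig path and pod name are taken from this log for illustration):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	name := "coredns-66bff467f8-j6rd4" // pod name from the log
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat lookup errors as transient; keep polling
    		}
    		return isPodReady(pod), nil
    	})
    	fmt.Println("ready:", err == nil)
    }
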
	I1206 19:11:13.488508  292014 api_server.go:52] waiting for apiserver process to appear ...
	I1206 19:11:13.488585  292014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:11:13.502924  292014 api_server.go:72] duration metric: took 37.773406789s to wait for apiserver process to appear ...
	I1206 19:11:13.502948  292014 api_server.go:88] waiting for apiserver healthz status ...
	I1206 19:11:13.502966  292014 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 19:11:13.511724  292014 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 19:11:13.512642  292014 api_server.go:141] control plane version: v1.18.20
	I1206 19:11:13.512670  292014 api_server.go:131] duration metric: took 9.713945ms to wait for apiserver health ...
	I1206 19:11:13.512680  292014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 19:11:13.685248  292014 request.go:629] Waited for 172.501447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1206 19:11:13.691051  292014 system_pods.go:59] 7 kube-system pods found
	I1206 19:11:13.691091  292014 system_pods.go:61] "coredns-66bff467f8-j6rd4" [09aa55ee-f29a-4673-858b-cd3fdeb97420] Running
	I1206 19:11:13.691099  292014 system_pods.go:61] "etcd-ingress-addon-legacy-998555" [f85ccb38-3e12-4620-866d-ab34e957284d] Running
	I1206 19:11:13.691104  292014 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-998555" [071475a1-f236-42df-b020-2fb16bb28716] Running
	I1206 19:11:13.691110  292014 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-998555" [a6986ba1-64c6-466a-b5a4-b9b72264cff8] Running
	I1206 19:11:13.691114  292014 system_pods.go:61] "kube-proxy-46f87" [45cac89b-d099-4bf4-bfe7-27a0d181c6f8] Running
	I1206 19:11:13.691120  292014 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-998555" [d6f7a3a9-fde0-4f04-a5fe-dbe5c6284faf] Running
	I1206 19:11:13.691127  292014 system_pods.go:61] "storage-provisioner" [c625a983-18f4-4fee-82fa-009db83b0171] Running
	I1206 19:11:13.691134  292014 system_pods.go:74] duration metric: took 178.447837ms to wait for pod list to return data ...
	I1206 19:11:13.691148  292014 default_sa.go:34] waiting for default service account to be created ...
	I1206 19:11:13.885425  292014 request.go:629] Waited for 194.176159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1206 19:11:13.888093  292014 default_sa.go:45] found service account: "default"
	I1206 19:11:13.888121  292014 default_sa.go:55] duration metric: took 196.963526ms for default service account to be created ...
	I1206 19:11:13.888132  292014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 19:11:14.085592  292014 request.go:629] Waited for 197.389801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1206 19:11:14.091526  292014 system_pods.go:86] 7 kube-system pods found
	I1206 19:11:14.091561  292014 system_pods.go:89] "coredns-66bff467f8-j6rd4" [09aa55ee-f29a-4673-858b-cd3fdeb97420] Running
	I1206 19:11:14.091569  292014 system_pods.go:89] "etcd-ingress-addon-legacy-998555" [f85ccb38-3e12-4620-866d-ab34e957284d] Running
	I1206 19:11:14.091575  292014 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-998555" [071475a1-f236-42df-b020-2fb16bb28716] Running
	I1206 19:11:14.091580  292014 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-998555" [a6986ba1-64c6-466a-b5a4-b9b72264cff8] Running
	I1206 19:11:14.091585  292014 system_pods.go:89] "kube-proxy-46f87" [45cac89b-d099-4bf4-bfe7-27a0d181c6f8] Running
	I1206 19:11:14.091591  292014 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-998555" [d6f7a3a9-fde0-4f04-a5fe-dbe5c6284faf] Running
	I1206 19:11:14.091597  292014 system_pods.go:89] "storage-provisioner" [c625a983-18f4-4fee-82fa-009db83b0171] Running
	I1206 19:11:14.091637  292014 system_pods.go:126] duration metric: took 203.465281ms to wait for k8s-apps to be running ...
	I1206 19:11:14.091650  292014 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 19:11:14.091717  292014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:11:14.107163  292014 system_svc.go:56] duration metric: took 15.502832ms WaitForService to wait for kubelet.
	I1206 19:11:14.107189  292014 kubeadm.go:581] duration metric: took 38.377680554s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 19:11:14.107224  292014 node_conditions.go:102] verifying NodePressure condition ...
	I1206 19:11:14.285651  292014 request.go:629] Waited for 178.354686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1206 19:11:14.288864  292014 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 19:11:14.288903  292014 node_conditions.go:123] node cpu capacity is 2
	I1206 19:11:14.288922  292014 node_conditions.go:105] duration metric: took 181.692019ms to run NodePressure ...
	I1206 19:11:14.288935  292014 start.go:228] waiting for startup goroutines ...
	I1206 19:11:14.288943  292014 start.go:233] waiting for cluster config update ...
	I1206 19:11:14.288954  292014 start.go:242] writing updated cluster config ...
	I1206 19:11:14.289270  292014 ssh_runner.go:195] Run: rm -f paused
	I1206 19:11:14.360403  292014 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1206 19:11:14.362936  292014 out.go:177] 
	W1206 19:11:14.365110  292014 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1206 19:11:14.367231  292014 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1206 19:11:14.369461  292014 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-998555" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Dec 06 19:09:53 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:09:53.520559396Z" level=info msg="Daemon has completed initialization"
	Dec 06 19:09:53 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:09:53.544946961Z" level=info msg="API listen on [::]:2376"
	Dec 06 19:09:53 ingress-addon-legacy-998555 systemd[1]: Started Docker Application Container Engine.
	Dec 06 19:09:53 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:09:53.547203734Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 06 19:11:16 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:16.234195416Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Dec 06 19:11:17 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:17.874898114Z" level=info msg="ignoring event" container=cfb8802d4ca09c6b9ce8dfa24d3b1c80cdd61eb64441d82588186545c4b1b135 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:17 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:17.913601380Z" level=info msg="ignoring event" container=97c1d3f79c1a834f530d499fd262d7bccc80eb95b16958aecd708745f8ee3b5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:18 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:18.306475625Z" level=info msg="ignoring event" container=7cba802182440604078530cc0c855108fc382df83eb963ff02d4e68243a43840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:18 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:18.472894349Z" level=info msg="ignoring event" container=78cfc46a0f0fc858575a000a8ece1a4e096b57b58624e88d815009761133ab9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:19 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:19.337227104Z" level=info msg="ignoring event" container=8dfbbd721f2498a814f2ac415fe6e6eb6e2e09077eb874acac02c3ad1df378a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:19 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:19.667502331Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Dec 06 19:11:26 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:26.511633691Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 06 19:11:26 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:26.547107914Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 06 19:11:26 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:26.697847746Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Dec 06 19:11:33 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:33.271691758Z" level=info msg="ignoring event" container=d6cf49ebbe816beaba8fcf009fb0ae4209cf37bfb3bccc1bcb24abeeba34ec55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:33 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:33.594734387Z" level=info msg="ignoring event" container=0ef9b958f210be289348cc95aff0020b240d87c27b9a7b76fac073258c443e9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:49 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:49.542288464Z" level=info msg="ignoring event" container=a4e8225e67f6e1906d7bee0f7829d519d03e86b846b5bb50f946dee303c5622b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:49 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:49.837386985Z" level=info msg="ignoring event" container=6474370cbd89b6a2df24501a375dd946d1e59440cdb186bdffdf965d34f14a74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:11:50 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:11:50.815445165Z" level=info msg="ignoring event" container=39cb5e22db130d204bf03f22a959521451b4119782f4032ef1bb43b321e193d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:12:03 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:03.341118303Z" level=info msg="ignoring event" container=92cb1708f10415937283d5337fa4c0c5b9f992450cce40d6373e941445e4040d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:12:06 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:06.451740479Z" level=info msg="ignoring event" container=189cb76e723f91459ce9ee2a5ebde1addbf7ba03e8fdfae896ad832197b3a829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:12:16 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:16.246762398Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=29df084f810b49714b817b964a637310a23207c5de643b38a9f0ee154ed52656
	Dec 06 19:12:16 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:16.268123286Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=29df084f810b49714b817b964a637310a23207c5de643b38a9f0ee154ed52656
	Dec 06 19:12:16 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:16.331599002Z" level=info msg="ignoring event" container=29df084f810b49714b817b964a637310a23207c5de643b38a9f0ee154ed52656 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 19:12:16 ingress-addon-legacy-998555 dockerd[1300]: time="2023-12-06T19:12:16.407295854Z" level=info msg="ignoring event" container=f60a7228627dc1115a50f9d9794f78eadc616edd2794a0c26221c2bf2561eb23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	189cb76e723f9       dd1b12fcb6097                                                                                                      15 seconds ago       Exited              hello-world-app           2                   815bcb26e3d57       hello-world-app-5f5d8b66bb-n8xw7
	51a785a957c82       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                      41 seconds ago       Running             nginx                     0                   49e9177cf6e32       nginx
	29df084f810b4       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   f60a7228627dc       ingress-nginx-controller-7fcf777cb7-vf99f
	78cfc46a0f0fc       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   8dfbbd721f249       ingress-nginx-admission-patch-9jb4w
	97c1d3f79c1a8       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   7cba802182440       ingress-nginx-admission-create-t7rqq
	4044f56bb53eb       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   0a32ab3bd1ef0       storage-provisioner
	fa119a256c310       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   6b6402ee408a9       coredns-66bff467f8-j6rd4
	73bd525ac3ec0       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   3f4483e9347ee       kube-proxy-46f87
	7597c64fc4f32       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   ad23e5f69e9f6       etcd-ingress-addon-legacy-998555
	f728da92c1bcc       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   f48673309c3e1       kube-scheduler-ingress-addon-legacy-998555
	2e63626b49ac0       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   08a9d894f2d37       kube-controller-manager-ingress-addon-legacy-998555
	65483c2e59549       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   2027f427a6352       kube-apiserver-ingress-addon-legacy-998555
	
	* 
	* ==> coredns [fa119a256c31] <==
	* [INFO] 172.17.0.1:57790 - 4712 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042527s
	[INFO] 172.17.0.1:50417 - 52274 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002642s
	[INFO] 172.17.0.1:57790 - 53693 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004955s
	[INFO] 172.17.0.1:59633 - 56984 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098616s
	[INFO] 172.17.0.1:50417 - 24728 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023073s
	[INFO] 172.17.0.1:32133 - 9490 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031343s
	[INFO] 172.17.0.1:57790 - 52890 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058707s
	[INFO] 172.17.0.1:50417 - 60348 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003113s
	[INFO] 172.17.0.1:59633 - 48291 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101431s
	[INFO] 172.17.0.1:32133 - 40217 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042863s
	[INFO] 172.17.0.1:50417 - 28986 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066748s
	[INFO] 172.17.0.1:57790 - 53922 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045973s
	[INFO] 172.17.0.1:32133 - 54575 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078818s
	[INFO] 172.17.0.1:57790 - 48959 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001544668s
	[INFO] 172.17.0.1:59633 - 1058 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002384058s
	[INFO] 172.17.0.1:32133 - 34495 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00107014s
	[INFO] 172.17.0.1:50417 - 7649 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001737987s
	[INFO] 172.17.0.1:59633 - 9355 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001632183s
	[INFO] 172.17.0.1:32133 - 4770 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001626366s
	[INFO] 172.17.0.1:50417 - 4413 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000948812s
	[INFO] 172.17.0.1:57790 - 60520 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001874051s
	[INFO] 172.17.0.1:32133 - 20834 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00011126s
	[INFO] 172.17.0.1:57790 - 39570 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005997s
	[INFO] 172.17.0.1:59633 - 38944 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030293s
	[INFO] 172.17.0.1:50417 - 31893 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000024689s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-998555
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-998555
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=31a3600ce72029d920a55140bbc6d0705e357503
	                    minikube.k8s.io/name=ingress-addon-legacy-998555
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T19_10_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 19:10:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-998555
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 19:12:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 19:11:53 +0000   Wed, 06 Dec 2023 19:10:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 19:11:53 +0000   Wed, 06 Dec 2023 19:10:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 19:11:53 +0000   Wed, 06 Dec 2023 19:10:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 19:11:53 +0000   Wed, 06 Dec 2023 19:10:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-998555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 4be8ccd143b941fb8c3b5ddd78f63cd9
	  System UUID:                330a7b5e-d3d1-4f79-adb5-4429d7a3b6ae
	  Boot ID:                    4d819a28-0d74-43d3-adc9-9cf72064e49e
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-n8xw7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 coredns-66bff467f8-j6rd4                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-ingress-addon-legacy-998555                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-apiserver-ingress-addon-legacy-998555             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-998555    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-46f87                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-ingress-addon-legacy-998555             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m14s (x5 over 2m14s)  kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x4 over 2m14s)  kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x4 over 2m14s)  kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s                   kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                   kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                   kubelet     Node ingress-addon-legacy-998555 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             119s                   kubelet     Node ingress-addon-legacy-998555 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  119s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                109s                   kubelet     Node ingress-addon-legacy-998555 status is now: NodeReady
	  Normal  Starting                 105s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001085] FS-Cache: O-key=[8] '965d3b0000000000'
	[  +0.000776] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=00000000e02334ed
	[  +0.001116] FS-Cache: N-key=[8] '965d3b0000000000'
	[  +0.025156] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001077] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=000000007e02f994
	[  +0.001111] FS-Cache: O-key=[8] '965d3b0000000000'
	[  +0.000748] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001027] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=0000000010910250
	[  +0.001105] FS-Cache: N-key=[8] '965d3b0000000000'
	[Dec 6 19:08] FS-Cache: Duplicate cookie detected
	[  +0.000745] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001022] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=000000007890ace4
	[  +0.001170] FS-Cache: O-key=[8] '955d3b0000000000'
	[  +0.000782] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=000000006fff6215
	[  +0.001145] FS-Cache: N-key=[8] '955d3b0000000000'
	[  +0.418323] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=000000002ab5e6a7{9p.inode} n=00000000cee67af2
	[  +0.001125] FS-Cache: O-key=[8] '9b5d3b0000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000983] FS-Cache: N-cookie d=000000002ab5e6a7{9p.inode} n=000000001c689cd7
	[  +0.001099] FS-Cache: N-key=[8] '9b5d3b0000000000'
	
	* 
	* ==> etcd [7597c64fc4f3] <==
	* raft2023/12/06 19:10:11 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/06 19:10:11 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/06 19:10:11 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/06 19:10:11 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-06 19:10:11.359884 W | auth: simple token is not cryptographically signed
	2023-12-06 19:10:11.363743 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-06 19:10:11.365857 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/06 19:10:11 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-06 19:10:11.367229 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-06 19:10:11.369022 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-06 19:10:11.369372 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-06 19:10:11.369579 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/06 19:10:12 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/06 19:10:12 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/06 19:10:12 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/06 19:10:12 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/06 19:10:12 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-06 19:10:12.456515 I | etcdserver: published {Name:ingress-addon-legacy-998555 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-06 19:10:12.456744 I | embed: ready to serve client requests
	2023-12-06 19:10:12.458684 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-06 19:10:12.458969 I | embed: ready to serve client requests
	2023-12-06 19:10:12.460474 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-06 19:10:12.484585 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-06 19:10:12.489850 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-06 19:10:12.490284 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  19:12:22 up  1:54,  0 users,  load average: 1.55, 1.93, 2.07
	Linux ingress-addon-legacy-998555 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [65483c2e5954] <==
	* I1206 19:10:16.462664       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1206 19:10:16.497609       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1206 19:10:16.646767       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 19:10:16.646979       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 19:10:16.647099       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1206 19:10:16.646847       1 cache.go:39] Caches are synced for autoregister controller
	I1206 19:10:16.682815       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1206 19:10:17.435643       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1206 19:10:17.435674       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1206 19:10:17.445562       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1206 19:10:17.451072       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1206 19:10:17.451099       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1206 19:10:17.915386       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 19:10:17.963240       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1206 19:10:18.124708       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1206 19:10:18.126217       1 controller.go:609] quota admission added evaluator for: endpoints
	I1206 19:10:18.134676       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 19:10:18.893233       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1206 19:10:19.586342       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1206 19:10:19.678371       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1206 19:10:23.210463       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 19:10:35.827721       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1206 19:10:35.851845       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1206 19:11:15.300539       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1206 19:11:36.703552       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [2e63626b49ac] <==
	* I1206 19:10:35.903479       1 shared_informer.go:230] Caches are synced for disruption 
	I1206 19:10:35.903494       1 disruption.go:339] Sending events to api server.
	I1206 19:10:35.903527       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1206 19:10:35.904100       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1206 19:10:35.904125       1 shared_informer.go:230] Caches are synced for GC 
	I1206 19:10:35.904169       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1206 19:10:35.904534       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-998555", UID:"72a4804c-be87-4341-b238-2321773ba61f", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-998555 event: Registered Node ingress-addon-legacy-998555 in Controller
	I1206 19:10:35.929024       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"1750d589-7329-4827-869c-f8f38e66291a", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-j6rd4
	I1206 19:10:35.942895       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1206 19:10:35.998387       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 19:10:35.998406       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1206 19:10:36.006773       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 19:10:36.044891       1 shared_informer.go:230] Caches are synced for attach detach 
	I1206 19:10:36.045104       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1206 19:10:36.645524       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1206 19:10:36.645568       1 shared_informer.go:230] Caches are synced for resource quota 
	I1206 19:11:15.276536       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7136f0de-4617-490c-8058-3cdaef469504", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1206 19:11:15.305123       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f3dfa960-3a00-4ef0-87a8-9d4314193b83", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-vf99f
	I1206 19:11:15.363884       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ac542057-be16-4692-a1c6-68322026494e", APIVersion:"batch/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-t7rqq
	I1206 19:11:15.401735       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"efd4b0dc-6d79-4726-ad02-2f26f6afe754", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9jb4w
	I1206 19:11:18.281203       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ac542057-be16-4692-a1c6-68322026494e", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 19:11:19.279205       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"efd4b0dc-6d79-4726-ad02-2f26f6afe754", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 19:11:46.455782       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"27c2419c-50bb-4721-8c1d-3d59dc4a673d", APIVersion:"apps/v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1206 19:11:46.470305       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"63c5a233-209c-4f39-9b2a-66ed9451158b", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-n8xw7
	E1206 19:12:18.925543       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-ghv5z" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [73bd525ac3ec] <==
	* W1206 19:10:37.139770       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1206 19:10:37.157702       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1206 19:10:37.157831       1 server_others.go:186] Using iptables Proxier.
	I1206 19:10:37.159205       1 server.go:583] Version: v1.18.20
	I1206 19:10:37.161907       1 config.go:133] Starting endpoints config controller
	I1206 19:10:37.161956       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1206 19:10:37.162028       1 config.go:315] Starting service config controller
	I1206 19:10:37.162040       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1206 19:10:37.262148       1 shared_informer.go:230] Caches are synced for service config 
	I1206 19:10:37.262149       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f728da92c1bc] <==
	* I1206 19:10:16.652224       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1206 19:10:16.652441       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1206 19:10:16.654506       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1206 19:10:16.654914       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 19:10:16.655021       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 19:10:16.655109       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1206 19:10:16.666356       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 19:10:16.668509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 19:10:16.668583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 19:10:16.668638       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 19:10:16.680500       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 19:10:16.685060       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 19:10:16.685230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 19:10:16.685383       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 19:10:16.685527       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 19:10:16.685679       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 19:10:16.685958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 19:10:16.700635       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 19:10:17.522332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 19:10:17.690717       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 19:10:17.701819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 19:10:17.734022       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 19:10:17.760338       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 19:10:17.774172       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1206 19:10:20.159004       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 06 19:12:01 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:01.304641    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4e8225e67f6e1906d7bee0f7829d519d03e86b846b5bb50f946dee303c5622b
	Dec 06 19:12:01 ingress-addon-legacy-998555 kubelet[2899]: E1206 19:12:01.305238    2899 pod_workers.go:191] Error syncing pod de133272-3fb8-415a-a5be-09823badf5b2 ("kube-ingress-dns-minikube_kube-system(de133272-3fb8-415a-a5be-09823badf5b2)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(de133272-3fb8-415a-a5be-09823badf5b2)"
	Dec 06 19:12:02 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:02.396379    2899 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-h8rc7" (UniqueName: "kubernetes.io/secret/de133272-3fb8-415a-a5be-09823badf5b2-minikube-ingress-dns-token-h8rc7") pod "de133272-3fb8-415a-a5be-09823badf5b2" (UID: "de133272-3fb8-415a-a5be-09823badf5b2")
	Dec 06 19:12:02 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:02.402928    2899 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de133272-3fb8-415a-a5be-09823badf5b2-minikube-ingress-dns-token-h8rc7" (OuterVolumeSpecName: "minikube-ingress-dns-token-h8rc7") pod "de133272-3fb8-415a-a5be-09823badf5b2" (UID: "de133272-3fb8-415a-a5be-09823badf5b2"). InnerVolumeSpecName "minikube-ingress-dns-token-h8rc7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 19:12:02 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:02.498484    2899 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-h8rc7" (UniqueName: "kubernetes.io/secret/de133272-3fb8-415a-a5be-09823badf5b2-minikube-ingress-dns-token-h8rc7") on node "ingress-addon-legacy-998555" DevicePath ""
	Dec 06 19:12:03 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:03.785816    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a4e8225e67f6e1906d7bee0f7829d519d03e86b846b5bb50f946dee303c5622b
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:06.304576    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 39cb5e22db130d204bf03f22a959521451b4119782f4032ef1bb43b321e193d1
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: W1206 19:12:06.484167    2899 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod0e1d4e19-6eec-40ec-81a1-6a4772b42a18/189cb76e723f91459ce9ee2a5ebde1addbf7ba03e8fdfae896ad832197b3a829": none of the resources are being tracked.
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: W1206 19:12:06.811575    2899 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-n8xw7 through plugin: invalid network status for
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:06.817297    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 39cb5e22db130d204bf03f22a959521451b4119782f4032ef1bb43b321e193d1
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:06.817546    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 189cb76e723f91459ce9ee2a5ebde1addbf7ba03e8fdfae896ad832197b3a829
	Dec 06 19:12:06 ingress-addon-legacy-998555 kubelet[2899]: E1206 19:12:06.817776    2899 pod_workers.go:191] Error syncing pod 0e1d4e19-6eec-40ec-81a1-6a4772b42a18 ("hello-world-app-5f5d8b66bb-n8xw7_default(0e1d4e19-6eec-40ec-81a1-6a4772b42a18)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-n8xw7_default(0e1d4e19-6eec-40ec-81a1-6a4772b42a18)"
	Dec 06 19:12:07 ingress-addon-legacy-998555 kubelet[2899]: W1206 19:12:07.825946    2899 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-n8xw7 through plugin: invalid network status for
	Dec 06 19:12:14 ingress-addon-legacy-998555 kubelet[2899]: E1206 19:12:14.229967    2899 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vf99f.179e53e0d0612150", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vf99f", UID:"1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-998555"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544fc38d7f1550, ext:114712485157, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544fc38d7f1550, ext:114712485157, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vf99f.179e53e0d0612150" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 19:12:14 ingress-addon-legacy-998555 kubelet[2899]: E1206 19:12:14.245620    2899 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vf99f.179e53e0d0612150", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vf99f", UID:"1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-998555"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544fc38d7f1550, ext:114712485157, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544fc38e3818d9, ext:114724610221, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vf99f.179e53e0d0612150" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 19:12:16 ingress-addon-legacy-998555 kubelet[2899]: W1206 19:12:16.903031    2899 pod_container_deletor.go:77] Container "f60a7228627dc1115a50f9d9794f78eadc616edd2794a0c26221c2bf2561eb23" not found in pod's containers
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.342881    2899 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sp7mf" (UniqueName: "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-ingress-nginx-token-sp7mf") pod "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5" (UID: "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5")
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.342933    2899 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-webhook-cert") pod "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5" (UID: "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5")
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.347670    2899 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5" (UID: "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.349711    2899 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-ingress-nginx-token-sp7mf" (OuterVolumeSpecName: "ingress-nginx-token-sp7mf") pod "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5" (UID: "1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5"). InnerVolumeSpecName "ingress-nginx-token-sp7mf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.443269    2899 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sp7mf" (UniqueName: "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-ingress-nginx-token-sp7mf") on node "ingress-addon-legacy-998555" DevicePath ""
	Dec 06 19:12:18 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:18.443331    2899 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5-webhook-cert") on node "ingress-addon-legacy-998555" DevicePath ""
	Dec 06 19:12:19 ingress-addon-legacy-998555 kubelet[2899]: W1206 19:12:19.324213    2899 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1b2a3169-2bf9-42ec-9b5b-b26c27cd19c5/volumes" does not exist
	Dec 06 19:12:22 ingress-addon-legacy-998555 kubelet[2899]: I1206 19:12:22.306950    2899 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 189cb76e723f91459ce9ee2a5ebde1addbf7ba03e8fdfae896ad832197b3a829
	Dec 06 19:12:22 ingress-addon-legacy-998555 kubelet[2899]: E1206 19:12:22.307262    2899 pod_workers.go:191] Error syncing pod 0e1d4e19-6eec-40ec-81a1-6a4772b42a18 ("hello-world-app-5f5d8b66bb-n8xw7_default(0e1d4e19-6eec-40ec-81a1-6a4772b42a18)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-n8xw7_default(0e1d4e19-6eec-40ec-81a1-6a4772b42a18)"
	
	* 
	* ==> storage-provisioner [4044f56bb53e] <==
	* I1206 19:10:39.314916       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 19:10:39.329841       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 19:10:39.332945       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 19:10:39.344500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 19:10:39.344807       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-998555_c6a11342-735e-4b9e-906e-1cc95367941b!
	I1206 19:10:39.344963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86f345dd-e486-46d4-9260-40e9509fb723", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-998555_c6a11342-735e-4b9e-906e-1cc95367941b became leader
	I1206 19:10:39.445777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-998555_c6a11342-735e-4b9e-906e-1cc95367941b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-998555 -n ingress-addon-legacy-998555
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-998555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.83s)

                                                
                                    
x
+
TestMissingContainerUpgrade (487.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker: exit status 80 (55.105222969s)

                                                
                                                
-- stdout --
	* [missing-upgrade-529407] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node missing-upgrade-529407 in cluster missing-upgrade-529407
	* Pulling base image ...
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 514.92 MiB / 514.92 MiB  100.00% 69.95 MiB p/s
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
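
The GUEST_PROVISION failure above indicates an IP collision on the docker network minikube creates for this profile (the subnet it picked here is 192.168.59.0/24, per the docker inspect output further down). One way to see what is occupying a range on the affected host is to dump every docker network's IPAM configuration. The following is a minimal diagnostic sketch, not part of the test suite; it assumes only a local docker CLI on PATH and uses the standard `docker network ls` / `docker network inspect` commands:

// List every docker network's subnet and gateway so an operator can spot
// the range that collides with the IP minikube requests for the profile.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Only the fields we care about from `docker network inspect` output.
type network struct {
	Name string `json:"Name"`
	IPAM struct {
		Config []struct {
			Subnet  string `json:"Subnet"`
			Gateway string `json:"Gateway"`
		} `json:"Config"`
	} `json:"IPAM"`
}

func main() {
	// `docker network ls -q` prints one network ID per line; feed them all
	// to a single `docker network inspect`, which emits a JSON array.
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		log.Fatal(err)
	}
	args := append([]string{"network", "inspect"}, strings.Fields(string(ids))...)
	out, err := exec.Command("docker", args...).Output()
	if err != nil {
		log.Fatal(err)
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		log.Fatal(err)
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%-30s %-18s gw=%s\n", n.Name, c.Subnet, c.Gateway)
		}
	}
}

Any network whose subnet overlaps 192.168.59.0/24 on the host that produced this log would explain the collision.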
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker: exit status 80 (3m28.183878446s)

                                                
                                                
-- stdout --
	* [missing-upgrade-529407] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-529407 in cluster missing-upgrade-529407
	* Pulling base image ...
	* docker "missing-upgrade-529407" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker
E1206 19:41:09.620628  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:41:26.193581  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2529015267.exe start -p missing-upgrade-529407 --memory=2200 --driver=docker  --container-runtime=docker: exit status 80 (3m38.441204821s)

                                                
                                                
-- stdout --
	* [missing-upgrade-529407] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-529407 in cluster missing-upgrade-529407
	* Pulling base image ...
	* docker "missing-upgrade-529407" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:328: release start failed: exit status 80
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2023-12-06 19:43:37.300143351 +0000 UTC m=+2706.210762385
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-529407
helpers_test.go:235: (dbg) docker inspect missing-upgrade-529407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea199bbd285fc0f566c597cf9446582c51946af59d57cf89eef80237a78358f6",
	        "Created": "2023-12-06T19:43:29.106796631Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "Address already in use",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/missing-upgrade-529407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-529407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-529407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/76083fb6364043c92c931a8715f52238d3f8088c2f174c4129b445e3ca971df1-init/diff:/var/lib/docker/overlay2/b034cfeee2edbc2538493b59885720933cd59b4793577d18f1d297cb2e8abcbe/diff:/var/lib/docker/overlay2/cd6c482e863385dfbee0cf1401ee8b90b5969dd5e9e5009af1129d692fbf9416/diff:/var/lib/docker/overlay2/e8a540dcb1d111d3ec476fe4951863d16951cb51ee0d3eb541d16d5f47366563/diff:/var/lib/docker/overlay2/d47fdaf0a50b46a2993147c50a64f2cdf0e3df78c3dc44126a53e1abcbb6c0e6/diff:/var/lib/docker/overlay2/a410c31db4d8940e7a647ec51ac2ea949a48279b286e096150440a112eb23582/diff:/var/lib/docker/overlay2/0690a2d5798a9171ff27f8025ea9deb39d36a92556447f0691b4de5074a66645/diff:/var/lib/docker/overlay2/2bafde5fefe26dae2dd2a4a28bc57ee0ae26739755158427c330070b4aa46514/diff:/var/lib/docker/overlay2/76ae2a5353970cd2f32fe73dd1571b32df56e00a3bf4359aff76ab5a63e37d9e/diff:/var/lib/docker/overlay2/ae6da238c4775da52a4e4d91e85d7b9eb3c7698c716e5f5e0c3ef44b767fcc2b/diff:/var/lib/docker/overlay2/82c38b
ab0568d6ee4600dbd2946a7132b8c39be7785958059fb847c660058307/diff:/var/lib/docker/overlay2/d6d2cb58e3a7253a76a6eff39cc1d5361d98ef38eecd99f7a16c08e9114267b7/diff:/var/lib/docker/overlay2/48021bfe2eceee686c1ad9237ebf228a4a2a4f37b0d397f1bf2805bc752b5c2e/diff:/var/lib/docker/overlay2/5db62837d7fa17f4af99693b3da6ad608860cfc932ccec433d87c77306659a0e/diff:/var/lib/docker/overlay2/ef6ad3b309b60562a42c51c3694937a94eb98465499f5758392a270804d1c0b5/diff:/var/lib/docker/overlay2/dac8953906f724ae4dd03ba5a903f4294b21a4416173d0aba66d5d2fe35d4d58/diff:/var/lib/docker/overlay2/29aa2c1f6ee417b78928b76e58179e81a535ed893313e54be0cff62e6a3904bd/diff:/var/lib/docker/overlay2/17b456fb56a0891747c91a242ef013de06df586c0dc14065041e67e3a57e181c/diff:/var/lib/docker/overlay2/182d60f68506dedb02436630f17bf1c52648f5cfd1d4790b3513dd753f3d8777/diff:/var/lib/docker/overlay2/aeb93f08b4ee2e11a9425597ebdc1e5de6d02fd61b25bc030aa7c4526b91b9ec/diff:/var/lib/docker/overlay2/c4f9ba2d02b89f7fbd8d07e85fc87434191a8543e9c4f0e9123bd5e9784e6a8f/diff:/var/lib/d
ocker/overlay2/a3b0c6c237643acd3080d7d08e332f05dff0d76e10cae82bfa0ace5345a0bb0e/diff:/var/lib/docker/overlay2/3c0d84ddf5ae9d305f92354b505eaec924cb4e39384e4eb3bc59c1a178d80c67/diff:/var/lib/docker/overlay2/c6469e218f1ca455292a2c09cee26e648e4f162ae4d0dd837834fb5e641d2f3f/diff:/var/lib/docker/overlay2/ceffbfbb5328857b8a059d57a260671f4c4ceeec2e11e997ee2b05199b2fb3bc/diff:/var/lib/docker/overlay2/342286daddb2fbf5f6563e2321efc66703b181aa7e784fcaed5cb536b68c99fc/diff:/var/lib/docker/overlay2/47140547600e18d7479e17656aea512a6fcdc28730b710fc03b254924e5d37b5/diff:/var/lib/docker/overlay2/9240ef77d2e4f7fc9a109167179bc796618337460a553655a9c083348870315b/diff:/var/lib/docker/overlay2/0da6338b42f3abb3e7e03a491115e0fc30a067a3ce2ca1772facb28ed6b76543/diff:/var/lib/docker/overlay2/0857da2e85184aebe4bf5f1fe40af8bc76847cc740b1547c6656337ad01c799b/diff:/var/lib/docker/overlay2/39d21f2b43fa270d574b9bdbaa48117ea3d27ba828da9657da2122d387e4b6f6/diff:/var/lib/docker/overlay2/754a72e4bab364f4b7d5ca98d30c6f3114ef04cff5627e6f08e99ca045b
9b8d7/diff:/var/lib/docker/overlay2/175d65cdb8454dea63e7c2a0adfdfdbed8bcdcbeebef219f2f7b30842f117298/diff:/var/lib/docker/overlay2/48768ba3187df0d10ae923dc8a20f2289e471b3f39e6006566f6015bb4d98d76/diff:/var/lib/docker/overlay2/2f2a0eff3e5c8dbaf05167aae02b34c3cb5d9439be92675d4084709d9ea190c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76083fb6364043c92c931a8715f52238d3f8088c2f174c4129b445e3ca971df1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76083fb6364043c92c931a8715f52238d3f8088c2f174c4129b445e3ca971df1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76083fb6364043c92c931a8715f52238d3f8088c2f174c4129b445e3ca971df1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-529407",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-529407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-529407",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-529407",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-529407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-529407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.0"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ea199bbd285f",
	                        "missing-upgrade-529407"
	                    ],
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
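
The inspect output above contains the likely root cause: the container was created with a static IPAMConfig address of 192.168.59.0, which is the zeroth (network) address of the 192.168.59.0/24 subnet and therefore not assignable to a host. That is consistent with the `"Error": "Address already in use"` recorded in the container state, and would also explain why both retries fail identically after recreating the container. A small sketch (assumptions: docker CLI on PATH, container name taken from this log) that extracts the statically requested IP from `docker inspect`:

// Pull the statically requested IPv4 address out of `docker inspect`
// output. 192.168.59.0 is the network address of a /24, which docker
// cannot assign to a container -- consistent with the "Address already
// in use" error in the container state above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Mirrors the NetworkSettings.Networks[*].IPAMConfig shape shown above.
type inspect struct {
	NetworkSettings struct {
		Networks map[string]struct {
			IPAMConfig *struct {
				IPv4Address string `json:"IPv4Address"`
			} `json:"IPAMConfig"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "missing-upgrade-529407").Output()
	if err != nil {
		log.Fatal(err)
	}
	var res []inspect // `docker inspect` emits a JSON array
	if err := json.Unmarshal(out, &res); err != nil {
		log.Fatal(err)
	}
	for _, c := range res {
		for name, n := range c.NetworkSettings.Networks {
			if n.IPAMConfig != nil {
				fmt.Printf("network %q requests static IP %s\n", name, n.IPAMConfig.IPv4Address)
			}
		}
	}
}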
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-529407 -n missing-upgrade-529407
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-529407 -n missing-upgrade-529407: exit status 7 (90.149661ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-529407" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-529407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-529407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-529407: (1.322258014s)
--- FAIL: TestMissingContainerUpgrade (487.73s)

                                                
                                    

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.23
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 16.39
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.1
17 TestDownloadOnly/v1.29.0-rc.1/json-events 14.49
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.21
23 TestDownloadOnly/DeleteAll 0.41
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.25
26 TestBinaryMirror 0.64
27 TestOffline 99.28
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
32 TestAddons/Setup 151.81
34 TestAddons/parallel/Registry 15.58
36 TestAddons/parallel/InspektorGadget 10.86
37 TestAddons/parallel/MetricsServer 5.91
40 TestAddons/parallel/CSI 57.76
41 TestAddons/parallel/Headlamp 11.34
42 TestAddons/parallel/CloudSpanner 5.61
43 TestAddons/parallel/LocalPath 54.8
44 TestAddons/parallel/NvidiaDevicePlugin 5.54
47 TestAddons/serial/GCPAuth/Namespaces 0.19
48 TestAddons/StoppedEnableDisable 11.4
49 TestCertOptions 37.08
50 TestCertExpiration 245.86
51 TestDockerFlags 42.72
52 TestForceSystemdFlag 50.63
53 TestForceSystemdEnv 43.45
59 TestErrorSpam/setup 35.93
60 TestErrorSpam/start 0.94
61 TestErrorSpam/status 1.19
62 TestErrorSpam/pause 1.47
63 TestErrorSpam/unpause 1.63
64 TestErrorSpam/stop 2.17
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 48.81
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 38.76
71 TestFunctional/serial/KubeContext 0.08
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
76 TestFunctional/serial/CacheCmd/cache/add_local 1.15
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
81 TestFunctional/serial/CacheCmd/cache/delete 0.17
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 41.12
85 TestFunctional/serial/ComponentHealth 0.13
86 TestFunctional/serial/LogsCmd 1.49
87 TestFunctional/serial/LogsFileCmd 1.38
88 TestFunctional/serial/InvalidService 4.4
90 TestFunctional/parallel/ConfigCmd 0.61
91 TestFunctional/parallel/DashboardCmd 15.97
92 TestFunctional/parallel/DryRun 0.55
93 TestFunctional/parallel/InternationalLanguage 0.24
94 TestFunctional/parallel/StatusCmd 1.21
98 TestFunctional/parallel/ServiceCmdConnect 12.75
99 TestFunctional/parallel/AddonsCmd 0.21
100 TestFunctional/parallel/PersistentVolumeClaim 27.18
102 TestFunctional/parallel/SSHCmd 0.82
103 TestFunctional/parallel/CpCmd 1.92
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 2.5
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
114 TestFunctional/parallel/License 0.37
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.84
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
128 TestFunctional/parallel/ProfileCmd/profile_list 0.46
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 9.08
131 TestFunctional/parallel/ServiceCmd/List 0.76
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
134 TestFunctional/parallel/ServiceCmd/Format 0.46
135 TestFunctional/parallel/ServiceCmd/URL 0.6
136 TestFunctional/parallel/MountCmd/specific-port 2.61
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.31
138 TestFunctional/parallel/Version/short 0.16
139 TestFunctional/parallel/Version/components 1.36
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
144 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
145 TestFunctional/parallel/ImageCommands/Setup 2.08
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.89
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.28
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
151 TestFunctional/parallel/DockerEnv/bash 1.76
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.43
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.44
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.05
157 TestFunctional/delete_addon-resizer_images 0.09
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestImageBuild/serial/Setup 38.03
164 TestImageBuild/serial/NormalBuild 2.07
165 TestImageBuild/serial/BuildWithBuildArg 0.97
166 TestImageBuild/serial/BuildWithDockerIgnore 0.92
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
170 TestIngressAddonLegacy/StartLegacyK8sCluster 115.88
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.09
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
177 TestJSONOutput/start/Command 86.64
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.69
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.6
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 5.86
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.27
202 TestKicCustomNetwork/create_custom_network 36.1
203 TestKicCustomNetwork/use_default_bridge_network 35.68
204 TestKicExistingNetwork 38.12
205 TestKicCustomSubnet 39.03
206 TestKicStaticIP 39.42
207 TestMainNoArgs 0.08
208 TestMinikubeProfile 78.68
211 TestMountStart/serial/StartWithMountFirst 8.95
212 TestMountStart/serial/VerifyMountFirst 0.29
213 TestMountStart/serial/StartWithMountSecond 11.83
214 TestMountStart/serial/VerifyMountSecond 0.3
215 TestMountStart/serial/DeleteFirst 1.55
216 TestMountStart/serial/VerifyMountPostDelete 0.32
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 8.88
219 TestMountStart/serial/VerifyMountPostStop 0.33
222 TestMultiNode/serial/FreshStart2Nodes 82.5
223 TestMultiNode/serial/DeployApp2Nodes 37.88
224 TestMultiNode/serial/PingHostFrom2Pods 1.29
225 TestMultiNode/serial/AddNode 20.29
226 TestMultiNode/serial/MultiNodeLabels 0.09
227 TestMultiNode/serial/ProfileList 0.4
228 TestMultiNode/serial/CopyFile 12.26
229 TestMultiNode/serial/StopNode 2.49
230 TestMultiNode/serial/StartAfterStop 14.69
231 TestMultiNode/serial/RestartKeepsNodes 129.08
232 TestMultiNode/serial/DeleteNode 5.34
233 TestMultiNode/serial/StopMultiNode 21.7
234 TestMultiNode/serial/RestartMultiNode 84.47
235 TestMultiNode/serial/ValidateNameConflict 41.55
240 TestPreload 198.6
242 TestScheduledStopUnix 104.72
243 TestSkaffold 113.19
245 TestInsufficientStorage 12.87
246 TestRunningBinaryUpgrade 125.83
248 TestKubernetesUpgrade 399.61
251 TestPause/serial/Start 60.89
252 TestPause/serial/SecondStartNoReconfiguration 37
253 TestPause/serial/Pause 0.94
254 TestPause/serial/VerifyStatus 0.46
255 TestPause/serial/Unpause 0.69
256 TestPause/serial/PauseAgain 1.05
257 TestPause/serial/DeletePaused 2.61
258 TestPause/serial/VerifyDeletedResources 0.2
259 TestStoppedBinaryUpgrade/Setup 1.1
260 TestStoppedBinaryUpgrade/Upgrade 129.71
261 TestStoppedBinaryUpgrade/MinikubeLogs 2.47
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.16
271 TestNoKubernetes/serial/StartWithK8s 45.71
283 TestNoKubernetes/serial/StartWithStopK8s 18.75
284 TestNoKubernetes/serial/Start 14.33
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.67
286 TestNoKubernetes/serial/ProfileList 1.09
287 TestNoKubernetes/serial/Stop 1.34
288 TestNoKubernetes/serial/StartNoArgs 8.53
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
291 TestStartStop/group/old-k8s-version/serial/FirstStart 335.94
293 TestStartStop/group/no-preload/serial/FirstStart 54.93
294 TestStartStop/group/no-preload/serial/DeployApp 9.17
295 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
296 TestStartStop/group/no-preload/serial/Stop 11.2
297 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
298 TestStartStop/group/no-preload/serial/SecondStart 350.64
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
301 TestStartStop/group/old-k8s-version/serial/Stop 10.97
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
303 TestStartStop/group/old-k8s-version/serial/SecondStart 425.43
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.03
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
307 TestStartStop/group/no-preload/serial/Pause 3.32
309 TestStartStop/group/embed-certs/serial/FirstStart 49.52
310 TestStartStop/group/embed-certs/serial/DeployApp 9.62
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
312 TestStartStop/group/embed-certs/serial/Stop 10.95
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
314 TestStartStop/group/embed-certs/serial/SecondStart 356.63
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
318 TestStartStop/group/old-k8s-version/serial/Pause 3.47
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.4
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.55
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.96
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 328.44
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
329 TestStartStop/group/embed-certs/serial/Pause 3.45
331 TestStartStop/group/newest-cni/serial/FirstStart 48.92
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
334 TestStartStop/group/newest-cni/serial/Stop 5.78
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
336 TestStartStop/group/newest-cni/serial/SecondStart 32.68
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
340 TestStartStop/group/newest-cni/serial/Pause 3.46
341 TestNetworkPlugins/group/auto/Start 92.86
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.03
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.31
346 TestNetworkPlugins/group/kindnet/Start 66.34
347 TestNetworkPlugins/group/auto/KubeletFlags 0.69
348 TestNetworkPlugins/group/auto/NetCatPod 13.83
349 TestNetworkPlugins/group/auto/DNS 0.33
350 TestNetworkPlugins/group/auto/Localhost 0.3
351 TestNetworkPlugins/group/auto/HairPin 0.27
352 TestNetworkPlugins/group/calico/Start 84.34
353 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
355 TestNetworkPlugins/group/kindnet/NetCatPod 12.52
356 TestNetworkPlugins/group/kindnet/DNS 0.26
357 TestNetworkPlugins/group/kindnet/Localhost 0.25
358 TestNetworkPlugins/group/kindnet/HairPin 0.23
359 TestNetworkPlugins/group/custom-flannel/Start 69.74
360 TestNetworkPlugins/group/calico/ControllerPod 5.05
361 TestNetworkPlugins/group/calico/KubeletFlags 0.43
362 TestNetworkPlugins/group/calico/NetCatPod 12.65
363 TestNetworkPlugins/group/calico/DNS 0.32
364 TestNetworkPlugins/group/calico/Localhost 0.39
365 TestNetworkPlugins/group/calico/HairPin 0.29
366 TestNetworkPlugins/group/false/Start 60.2
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.43
369 TestNetworkPlugins/group/custom-flannel/DNS 0.27
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
372 TestNetworkPlugins/group/enable-default-cni/Start 57.84
373 TestNetworkPlugins/group/false/KubeletFlags 0.45
374 TestNetworkPlugins/group/false/NetCatPod 10.64
375 TestNetworkPlugins/group/false/DNS 0.33
376 TestNetworkPlugins/group/false/Localhost 0.35
377 TestNetworkPlugins/group/false/HairPin 0.24
378 TestNetworkPlugins/group/flannel/Start 65.9
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.72
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.56
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.37
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
384 TestNetworkPlugins/group/bridge/Start 91.13
385 TestNetworkPlugins/group/flannel/ControllerPod 5.04
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
387 TestNetworkPlugins/group/flannel/NetCatPod 10.43
388 TestNetworkPlugins/group/flannel/DNS 0.3
389 TestNetworkPlugins/group/flannel/Localhost 0.22
390 TestNetworkPlugins/group/flannel/HairPin 0.31
391 TestNetworkPlugins/group/kubenet/Start 88.07
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
393 TestNetworkPlugins/group/bridge/NetCatPod 10.44
394 TestNetworkPlugins/group/bridge/DNS 0.29
395 TestNetworkPlugins/group/bridge/Localhost 0.19
396 TestNetworkPlugins/group/bridge/HairPin 0.22
397 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
398 TestNetworkPlugins/group/kubenet/NetCatPod 10.37
399 TestNetworkPlugins/group/kubenet/DNS 0.21
400 TestNetworkPlugins/group/kubenet/Localhost 0.21
401 TestNetworkPlugins/group/kubenet/HairPin 0.19
x
+
TestDownloadOnly/v1.16.0/json-events (12.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.22918484s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942237
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942237: exit status 85 (91.676672ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |          |
	|         | -p download-only-942237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:58:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:58:31.216337  244819 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:58:31.216524  244819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:58:31.216542  244819 out.go:309] Setting ErrFile to fd 2...
	I1206 18:58:31.216549  244819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:58:31.216956  244819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	W1206 18:58:31.217136  244819 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: no such file or directory
	I1206 18:58:31.217608  244819 out.go:303] Setting JSON to true
	I1206 18:58:31.218941  244819 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6058,"bootTime":1701883054,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 18:58:31.219052  244819 start.go:138] virtualization:  
	I1206 18:58:31.222470  244819 out.go:97] [download-only-942237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 18:58:31.225268  244819 out.go:169] MINIKUBE_LOCATION=17740
	W1206 18:58:31.222875  244819 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 18:58:31.222941  244819 notify.go:220] Checking for updates...
	I1206 18:58:31.229913  244819 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:58:31.232393  244819 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 18:58:31.234753  244819 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 18:58:31.237422  244819 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 18:58:31.242762  244819 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:58:31.243079  244819 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:58:31.266973  244819 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:58:31.267077  244819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:58:31.351242  244819 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-06 18:58:31.341441081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:58:31.351404  244819 docker.go:295] overlay module found
	I1206 18:58:31.353729  244819 out.go:97] Using the docker driver based on user configuration
	I1206 18:58:31.353761  244819 start.go:298] selected driver: docker
	I1206 18:58:31.353768  244819 start.go:902] validating driver "docker" against <nil>
	I1206 18:58:31.353864  244819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:58:31.421630  244819 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-06 18:58:31.411566551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:58:31.421786  244819 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:58:31.422087  244819 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1206 18:58:31.422245  244819 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 18:58:31.424740  244819 out.go:169] Using Docker driver with root privileges
	I1206 18:58:31.427261  244819 cni.go:84] Creating CNI manager for ""
	I1206 18:58:31.427310  244819 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1206 18:58:31.427327  244819 start_flags.go:323] config:
	{Name:download-only-942237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-942237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:58:31.429666  244819 out.go:97] Starting control plane node download-only-942237 in cluster download-only-942237
	I1206 18:58:31.429690  244819 cache.go:121] Beginning downloading kic base image for docker with docker
	I1206 18:58:31.431813  244819 out.go:97] Pulling base image ...
	I1206 18:58:31.431839  244819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1206 18:58:31.432003  244819 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1206 18:58:31.448950  244819 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1206 18:58:31.449167  244819 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1206 18:58:31.449266  244819 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1206 18:58:31.495100  244819 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1206 18:58:31.495126  244819 cache.go:56] Caching tarball of preloaded images
	I1206 18:58:31.495971  244819 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1206 18:58:31.498671  244819 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1206 18:58:31.498696  244819 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1206 18:58:31.613518  244819 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1206 18:58:37.099338  244819 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-942237"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
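The preload URL in the log above carries a "?checksum=md5:..." suffix, so the downloader can hash the tarball as it arrives and reject a corrupt copy. Below is a minimal sketch of such a checksum-verified download in Go; it is not minikube's actual downloader, and the URL and digest are simply copied from the download.go log line above.

	// checksum_sketch.go: a minimal sketch (not minikube's real download code)
	// of a checksum-verified download like the one logged above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, wantMD5, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash the stream while writing it to disk, so the file
		// is not re-read after the download finishes.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and digest copied from the log above; the tarball is large,
		// so this is illustrative rather than something to run casually.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4",
			"a000baffb0664b293d602f95ed25caa6",
			"preloaded-images.tar.lz4",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}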

x
+
TestDownloadOnly/v1.28.4/json-events (16.39s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (16.385565914s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.39s)

x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942237
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942237: exit status 85 (100.034061ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |          |
	|         | -p download-only-942237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |          |
	|         | -p download-only-942237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:58:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:58:43.545570  244896 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:58:43.545806  244896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:58:43.545848  244896 out.go:309] Setting ErrFile to fd 2...
	I1206 18:58:43.545870  244896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:58:43.546229  244896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	W1206 18:58:43.546452  244896 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: no such file or directory
	I1206 18:58:43.546812  244896 out.go:303] Setting JSON to true
	I1206 18:58:43.548055  244896 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6070,"bootTime":1701883054,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 18:58:43.548173  244896 start.go:138] virtualization:  
	I1206 18:58:43.551013  244896 out.go:97] [download-only-942237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 18:58:43.553933  244896 out.go:169] MINIKUBE_LOCATION=17740
	I1206 18:58:43.551357  244896 notify.go:220] Checking for updates...
	I1206 18:58:43.556897  244896 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:58:43.559436  244896 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 18:58:43.561731  244896 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 18:58:43.564125  244896 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 18:58:43.568719  244896 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:58:43.569317  244896 config.go:182] Loaded profile config "download-only-942237": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1206 18:58:43.569389  244896 start.go:810] api.Load failed for download-only-942237: filestore "download-only-942237": Docker machine "download-only-942237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:58:43.569490  244896 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:58:43.569516  244896 start.go:810] api.Load failed for download-only-942237: filestore "download-only-942237": Docker machine "download-only-942237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:58:43.593083  244896 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:58:43.593188  244896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:58:43.688799  244896 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:58:43.678584416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:58:43.688912  244896 docker.go:295] overlay module found
	I1206 18:58:43.691435  244896 out.go:97] Using the docker driver based on existing profile
	I1206 18:58:43.691468  244896 start.go:298] selected driver: docker
	I1206 18:58:43.691475  244896 start.go:902] validating driver "docker" against &{Name:download-only-942237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-942237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:58:43.691661  244896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:58:43.760088  244896 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:58:43.750169742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:58:43.760671  244896 cni.go:84] Creating CNI manager for ""
	I1206 18:58:43.760700  244896 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 18:58:43.760720  244896 start_flags.go:323] config:
	{Name:download-only-942237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-942237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:58:43.763141  244896 out.go:97] Starting control plane node download-only-942237 in cluster download-only-942237
	I1206 18:58:43.763172  244896 cache.go:121] Beginning downloading kic base image for docker with docker
	I1206 18:58:43.765385  244896 out.go:97] Pulling base image ...
	I1206 18:58:43.765416  244896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1206 18:58:43.765521  244896 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1206 18:58:43.782613  244896 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1206 18:58:43.782764  244896 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1206 18:58:43.782785  244896 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1206 18:58:43.782790  244896 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1206 18:58:43.782798  244896 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1206 18:58:43.840208  244896 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1206 18:58:43.840234  244896 cache.go:56] Caching tarball of preloaded images
	I1206 18:58:43.841003  244896 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1206 18:58:43.843734  244896 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1206 18:58:43.843753  244896 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1206 18:58:43.978573  244896 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1206 18:58:58.156143  244896 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1206 18:58:58.156303  244896 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-942237"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.10s)
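Worth noting across the two runs: the v1.16.0 start logged "CNI unnecessary in this configuration, recommending no CNI", while this v1.28.4 start recommends bridge. The gate is the dockershim removal in Kubernetes v1.24: from there on, the docker driver plus docker runtime (via cri-dockerd) needs an explicit CNI. A simplified sketch of that version gate follows; the real decision lives in minikube's cni package and weighs more inputs, and the golang.org/x/mod/semver dependency is this sketch's choice, not necessarily minikube's.

	// cni_sketch.go: a simplified sketch of the CNI recommendation
	// visible in the cni.go log lines above.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func recommendCNI(driver, runtime, k8sVersion string) string {
		if driver == "docker" && runtime == "docker" {
			// Before v1.24 the docker runtime's built-in networking
			// sufficed; from v1.24 on, recommend the bridge CNI.
			if semver.Compare(k8sVersion, "v1.24.0") >= 0 {
				return "bridge"
			}
			return "" // no CNI needed
		}
		return "bridge"
	}

	func main() {
		fmt.Println(recommendCNI("docker", "docker", "v1.16.0")) // ""
		fmt.Println(recommendCNI("docker", "docker", "v1.28.4")) // "bridge"
	}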

x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (14.49s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942237 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.488826147s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (14.49s)

x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942237
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942237: exit status 85 (212.520807ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |          |
	|         | -p download-only-942237           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:58 UTC |          |
	|         | -p download-only-942237           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-942237 | jenkins | v1.32.0 | 06 Dec 23 18:59 UTC |          |
	|         | -p download-only-942237           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:59:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:59:00.103059  244978 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:59:00.119578  244978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:59:00.119606  244978 out.go:309] Setting ErrFile to fd 2...
	I1206 18:59:00.119614  244978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:59:00.119934  244978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	W1206 18:59:00.120105  244978 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: open /home/jenkins/minikube-integration/17740-239434/.minikube/config/config.json: no such file or directory
	I1206 18:59:00.120451  244978 out.go:303] Setting JSON to true
	I1206 18:59:00.121741  244978 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6087,"bootTime":1701883054,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 18:59:00.121832  244978 start.go:138] virtualization:  
	I1206 18:59:00.130826  244978 out.go:97] [download-only-942237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 18:59:00.149543  244978 out.go:169] MINIKUBE_LOCATION=17740
	I1206 18:59:00.131492  244978 notify.go:220] Checking for updates...
	I1206 18:59:00.165361  244978 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:59:00.173381  244978 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 18:59:00.184781  244978 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 18:59:00.187386  244978 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 18:59:00.198891  244978 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:59:00.200002  244978 config.go:182] Loaded profile config "download-only-942237": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1206 18:59:00.200119  244978 start.go:810] api.Load failed for download-only-942237: filestore "download-only-942237": Docker machine "download-only-942237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:59:00.200262  244978 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:59:00.200345  244978 start.go:810] api.Load failed for download-only-942237: filestore "download-only-942237": Docker machine "download-only-942237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:59:00.297326  244978 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:59:00.297443  244978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:59:00.404084  244978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:59:00.389095416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:59:00.404199  244978 docker.go:295] overlay module found
	I1206 18:59:00.406777  244978 out.go:97] Using the docker driver based on existing profile
	I1206 18:59:00.406820  244978 start.go:298] selected driver: docker
	I1206 18:59:00.406829  244978 start.go:902] validating driver "docker" against &{Name:download-only-942237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-942237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:59:00.407209  244978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:59:00.484076  244978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-06 18:59:00.473826686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 18:59:00.484644  244978 cni.go:84] Creating CNI manager for ""
	I1206 18:59:00.484673  244978 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 18:59:00.484688  244978 start_flags.go:323] config:
	{Name:download-only-942237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-942237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:59:00.486933  244978 out.go:97] Starting control plane node download-only-942237 in cluster download-only-942237
	I1206 18:59:00.486979  244978 cache.go:121] Beginning downloading kic base image for docker with docker
	I1206 18:59:00.489137  244978 out.go:97] Pulling base image ...
	I1206 18:59:00.489176  244978 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1206 18:59:00.489361  244978 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1206 18:59:00.510800  244978 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1206 18:59:00.510946  244978 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1206 18:59:00.510973  244978 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1206 18:59:00.510983  244978 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1206 18:59:00.510991  244978 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1206 18:59:00.569325  244978 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1206 18:59:00.569369  244978 cache.go:56] Caching tarball of preloaded images
	I1206 18:59:00.569545  244978 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1206 18:59:00.572165  244978 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1206 18:59:00.572196  244978 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1206 18:59:00.693684  244978 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e6c70ba8af96149bcd57a348676cbfba -> /home/jenkins/minikube-integration/17740-239434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-942237"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.21s)
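Each of these runs probes the host the same way: the cli_runner.go/info.go pairs above shell out to docker system info --format "{{json .}}" and decode the JSON. A minimal sketch of that probe follows; the struct is a small subset of the fields visible in the dump, not the full structure minikube decodes.

	// dockerinfo_sketch.go: a minimal sketch of the host probe logged above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo holds just a few of the fields `docker system info`
	// emits; field names match the JSON output.
	type dockerInfo struct {
		Driver          string `json:"Driver"`
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker not available:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s on %s: %d CPUs, %d bytes RAM, storage driver %s\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.Driver)
	}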

x
+
TestDownloadOnly/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.41s)

x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-942237
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

x
+
TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-136381 --alsologtostderr --binary-mirror http://127.0.0.1:38321 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-136381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-136381
--- PASS: TestBinaryMirror (0.64s)
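TestBinaryMirror points "minikube start --binary-mirror" at a local HTTP server so the Kubernetes binaries are fetched from it rather than the public release bucket. A sketch of the kind of mirror this implies; the ./mirror path and the upstream-style directory layout (e.g. v1.28.4/bin/linux/arm64/kubectl) are assumptions, with only the loopback port taken from the log above.

	// mirror_sketch.go: a static file server standing in for the
	// binary mirror the test points minikube at.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror on the loopback address passed via --binary-mirror.
		log.Fatal(http.ListenAndServe("127.0.0.1:38321",
			http.FileServer(http.Dir("./mirror"))))
	}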

x
+
TestOffline (99.28s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-113738 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-113738 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m36.775061312s)
helpers_test.go:175: Cleaning up "offline-docker-113738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-113738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-113738: (2.500111365s)
--- PASS: TestOffline (99.28s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-440984
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-440984: exit status 85 (101.324475ms)

-- stdout --
	* Profile "addons-440984" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-440984"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-440984
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-440984: exit status 85 (83.797546ms)

-- stdout --
	* Profile "addons-440984" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-440984"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

x
+
TestAddons/Setup (151.81s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-440984 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-440984 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m31.812750936s)
--- PASS: TestAddons/Setup (151.81s)

x
+
TestAddons/parallel/Registry (15.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 41.429291ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ctltk" [29445e78-d0ab-477b-aa78-a2b25b760193] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019034838s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d4bz6" [2271c7ec-a452-465a-a1f4-4f286f7b4c6c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.019593267s
addons_test.go:339: (dbg) Run:  kubectl --context addons-440984 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-440984 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-440984 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.018420228s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable registry --alsologtostderr -v=1: (1.131607324s)
--- PASS: TestAddons/parallel/Registry (15.58s)
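The registry check above boils down to one in-cluster HTTP request: busybox's "wget --spider" against the service's cluster-DNS name. The same reachability probe as a Go sketch; it only resolves when run inside the cluster (for example from a pod), since the hostname is cluster DNS.

	// registryprobe_sketch.go: the in-cluster reachability check the
	// busybox pod performs above, expressed in Go.
	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		// HEAD is enough: we only care that the registry answers.
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}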

x
+
TestAddons/parallel/InspektorGadget (10.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-29ccl" [d21dafeb-306e-45a3-ad63-b70cf8cfc09f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012988076s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-440984
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-440984: (5.849438778s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

x
+
TestAddons/parallel/MetricsServer (5.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 7.949324ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-gvs8h" [3f16f94b-9ea8-447f-bc5a-405dca598bb1] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018172172s
addons_test.go:414: (dbg) Run:  kubectl --context addons-440984 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.91s)
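"kubectl top pods" succeeds here only because the metrics-server addon serves the metrics.k8s.io API. A sketch of hitting that API directly follows; it shells out to kubectl rather than using client-go, and assumes the current kubeconfig context already points at the cluster.

	// metricsapi_sketch.go: querying the raw API that `kubectl top pods`
	// consumes, via `kubectl get --raw`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "--raw",
			"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").CombinedOutput()
		if err != nil {
			// CombinedOutput still returns the error body, printed below.
			fmt.Println("metrics API not available:", err)
		}
		fmt.Println(string(out))
	}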

x
+
TestAddons/parallel/CSI (57.76s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 49.099111ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-440984 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc -o jsonpath={.status.phase} -n default
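The run of helpers_test.go:394 lines above is a single polling wait: re-read ".status.phase" until the PVC reports Bound. A sketch of the same loop; the kubectl arguments are copied from the log, while the 2-second interval is an assumption, not the helper's actual cadence.

	// pvcwait_sketch.go: polling a PVC's phase until it binds, as the
	// repeated helper invocations above do.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForPVCBound(context, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-440984", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}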
addons_test.go:573: (dbg) Run:  kubectl --context addons-440984 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [251b4db8-cb44-47df-93e7-b90030779940] Pending
helpers_test.go:344: "task-pv-pod" [251b4db8-cb44-47df-93e7-b90030779940] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
2023/12/06 19:02:03 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "task-pv-pod" [251b4db8-cb44-47df-93e7-b90030779940] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012941498s
addons_test.go:583: (dbg) Run:  kubectl --context addons-440984 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440984 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440984 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440984 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-440984 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-440984 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-440984 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-440984 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c1e9f5e9-2434-431f-9e1d-8d7b549d3363] Pending
helpers_test.go:344: "task-pv-pod-restore" [c1e9f5e9-2434-431f-9e1d-8d7b549d3363] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c1e9f5e9-2434-431f-9e1d-8d7b549d3363] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.018003801s
addons_test.go:625: (dbg) Run:  kubectl --context addons-440984 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-440984 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-440984 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.799124473s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable volumesnapshots --alsologtostderr -v=1: (1.017846466s)
--- PASS: TestAddons/parallel/CSI (57.76s)
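
For readers reproducing this flow by hand, the sequence above is PVC -> pod -> VolumeSnapshot -> restored PVC -> pod. A minimal sketch of the first step in shell; the storage class name csi-hostpath-sc is an assumption, since the testdata manifests themselves are not shown in this log:

# create the claim the csi-hostpath driver should provision (class name assumed)
kubectl --context addons-440984 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc   # assumed name; not shown in the log
  resources:
    requests:
      storage: 1Gi
EOF
# poll the phase exactly as helpers_test.go:394 does above
kubectl --context addons-440984 get pvc hpvc -o jsonpath='{.status.phase}' -n default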

TestAddons/parallel/Headlamp (11.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-440984 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-440984 --alsologtostderr -v=1: (1.309302277s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-pddgt" [346d5bb7-6fea-4348-b16e-6d175db77118] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-pddgt" [346d5bb7-6fea-4348-b16e-6d175db77118] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.02785855s
--- PASS: TestAddons/parallel/Headlamp (11.34s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-2lhzh" [871f992f-551c-48cd-92be-f783d755d527] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010709832s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-440984
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (54.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-440984 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-440984 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a4296a4b-627c-4486-9512-8a494c982e5b] Pending
helpers_test.go:344: "test-local-path" [a4296a4b-627c-4486-9512-8a494c982e5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a4296a4b-627c-4486-9512-8a494c982e5b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a4296a4b-627c-4486-9512-8a494c982e5b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.012232883s
addons_test.go:890: (dbg) Run:  kubectl --context addons-440984 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 ssh "cat /opt/local-path-provisioner/pvc-fdf4318b-a14a-4027-acac-62bd5b9dd8b5_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-440984 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-440984 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-440984 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-440984 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.412400145s)
--- PASS: TestAddons/parallel/LocalPath (54.80s)
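
For reference, the local-path flow above only provisions the host directory under /opt/local-path-provisioner once a pod consumes the claim (local-path provisioners typically use WaitForFirstConsumer binding, which is why the phase polls repeat until the pod is scheduled). A minimal sketch of the claim, assuming the addon registers its storage class as local-path:

kubectl --context addons-440984 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed name; not shown in the log
  resources:
    requests:
      storage: 128Mi
EOF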

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zwpkt" [40ec6200-edb6-432d-8664-84c6a52db627] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.024951588s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-440984
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-440984 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-440984 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (11.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-440984
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-440984: (11.05831822s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-440984
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-440984
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-440984
--- PASS: TestAddons/StoppedEnableDisable (11.40s)

TestCertOptions (37.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-985478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-985478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.070936034s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-985478 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-985478 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-985478 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-985478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-985478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-985478: (2.217781238s)
--- PASS: TestCertOptions (37.08s)
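
The openssl step is the heart of this test: every --apiserver-ips and --apiserver-names value must appear as a SAN in the generated apiserver certificate, and the serving port must match --apiserver-port. A manual spot check along the same lines:

out/minikube-linux-arm64 -p cert-options-985478 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# expect 127.0.0.1, 192.168.15.15, localhost and www.google.com in the SAN list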

TestCertExpiration (245.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (35.65585956s)
E1206 19:47:24.215714  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.833975185s)
helpers_test.go:175: Cleaning up "cert-expiration-585749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-585749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-585749: (2.37123815s)
--- PASS: TestCertExpiration (245.86s)
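
The ~4 minute wall time is deliberate: the cluster is started with certificates that expire in 3 minutes, the test waits out the expiry, then restarts with a longer window to confirm minikube regenerates the certificates instead of failing. The equivalent manual sequence, using the same flags as the log:

out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
sleep 180   # let the 3m certificates lapse
out/minikube-linux-arm64 start -p cert-expiration-585749 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker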

TestDockerFlags (42.72s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-083434 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-083434 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.832778459s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-083434 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-083434 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-083434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-083434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-083434: (2.18596849s)
--- PASS: TestDockerFlags (42.72s)
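
The two ssh probes assert that --docker-env values reach the daemon's systemd Environment and that --docker-opt values reach its command line. The same checks by hand (the expected strings follow from the flags passed above):

out/minikube-linux-arm64 -p docker-flags-083434 ssh "sudo systemctl show docker --property=Environment --no-pager"
# expect FOO=BAR and BAZ=BAT in the Environment= line
out/minikube-linux-arm64 -p docker-flags-083434 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
# expect the debug and icc=true options in dockerd's ExecStart line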

TestForceSystemdFlag (50.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-640649 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-640649 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.714578776s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-640649 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-640649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-640649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-640649: (2.481945567s)
--- PASS: TestForceSystemdFlag (50.63s)
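
The assertion here is simply that the container runtime reports systemd as its cgroup driver once --force-systemd is set; the same one-liner from the log works against any profile:

out/minikube-linux-arm64 -p force-systemd-flag-640649 ssh "docker info --format {{.CgroupDriver}}"
# expected output: systemd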

TestForceSystemdEnv (43.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-693041 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-693041 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.787700898s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-693041 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-693041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-693041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-693041: (2.27019147s)
--- PASS: TestForceSystemdEnv (43.45s)

TestErrorSpam/setup (35.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-771751 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-771751 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-771751 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-771751 --driver=docker  --container-runtime=docker: (35.934200081s)
--- PASS: TestErrorSpam/setup (35.93s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.19s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 status
--- PASS: TestErrorSpam/status (1.19s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (2.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 stop: (1.917845766s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-771751 --log_dir /tmp/nospam-771751 stop
--- PASS: TestErrorSpam/stop (2.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17740-239434/.minikube/files/etc/test/nested/copy/244814/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-796172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (48.811832842s)
--- PASS: TestFunctional/serial/StartWithProxy (48.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-796172 --alsologtostderr -v=8: (38.748056319s)
functional_test.go:659: soft start took 38.755217725s for "functional-796172" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.76s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-796172 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 cache add registry.k8s.io/pause:3.1: (1.036534218s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 cache add registry.k8s.io/pause:3.3: (1.046805116s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-796172 /tmp/TestFunctionalserialCacheCmdcacheadd_local1344134098/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache add minikube-local-cache-test:functional-796172
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache delete minikube-local-cache-test:functional-796172
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-796172
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (393.043588ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
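
The reload sequence is: delete the image inside the node, prove it is gone with crictl (the expected exit status 1 above), then let minikube push it back from the host-side cache. As shell:

out/minikube-linux-arm64 -p functional-796172 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
out/minikube-linux-arm64 -p functional-796172 cache reload
out/minikube-linux-arm64 -p functional-796172 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 0: image restored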

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 kubectl -- --context functional-796172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-796172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (41.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 19:06:48.985389  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:48.992438  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.002684  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.022921  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.063180  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.143429  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.303814  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:49.624363  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:50.265235  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:51.545962  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:54.106584  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:06:59.226926  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:07:09.467715  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-796172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.116026729s)
functional_test.go:757: restart took 41.116123196s for "functional-796172" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.12s)
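
The restart carries --extra-config into the kube-apiserver manifest (the profile dump later in this report shows it as an ExtraOptions entry on the apiserver component). One way to confirm it landed, assuming the standard kubeadm component=kube-apiserver pod label:

kubectl --context functional-796172 -n kube-system get pod -l component=kube-apiserver -o yaml \
  | grep enable-admission-plugins
# expect NamespaceAutoProvision among the enabled plugins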

TestFunctional/serial/ComponentHealth (0.13s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-796172 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
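
The check lists the control-plane pods by label and requires each to be Running and Ready, which is exactly what the phase/status pairs above record. A compact way to eyeball the same thing by hand:

kubectl --context functional-796172 get po -l tier=control-plane -n kube-system \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase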

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 logs: (1.489291248s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 logs --file /tmp/TestFunctionalserialLogsFileCmd1747582127/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 logs --file /tmp/TestFunctionalserialLogsFileCmd1747582127/001/logs.txt: (1.381340056s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (4.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-796172 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-796172
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-796172: exit status 115 (609.551202ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32601 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-796172 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)
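
Exit status 115 (SVC_UNREACHABLE) is the expected result: the Service exists and receives a NodePort, but no pod backs it. A sketch of a service that reproduces this, assuming a selector that matches no pods (testdata/invalidsvc.yaml itself is not shown in the log):

kubectl --context functional-796172 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # assumed label; nothing carries it
  ports:
    - port: 80
EOF
out/minikube-linux-arm64 service invalid-svc -p functional-796172   # exits 115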

TestFunctional/parallel/ConfigCmd (0.61s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 config get cpus: exit status 14 (79.373963ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 config get cpus: exit status 14 (130.431499ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
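
Note that config get on an unset key exits 14 rather than printing an empty value, which is what both Non-zero exit lines above assert. The full round trip:

out/minikube-linux-arm64 -p functional-796172 config get cpus    # exit 14: key not set
out/minikube-linux-arm64 -p functional-796172 config set cpus 2
out/minikube-linux-arm64 -p functional-796172 config get cpus    # prints 2
out/minikube-linux-arm64 -p functional-796172 config unset cpus
out/minikube-linux-arm64 -p functional-796172 config get cpus    # exit 14 again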

TestFunctional/parallel/DashboardCmd (15.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-796172 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-796172 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 284039: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.97s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-796172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (246.396384ms)
-- stdout --
	* [functional-796172] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 19:07:58.849695  283586 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:07:58.849932  283586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:07:58.849963  283586 out.go:309] Setting ErrFile to fd 2...
	I1206 19:07:58.849986  283586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:07:58.850278  283586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 19:07:58.850802  283586 out.go:303] Setting JSON to false
	I1206 19:07:58.852178  283586 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6625,"bootTime":1701883054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 19:07:58.852321  283586 start.go:138] virtualization:  
	I1206 19:07:58.856697  283586 out.go:177] * [functional-796172] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1206 19:07:58.858935  283586 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:07:58.860935  283586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:07:58.859073  283586 notify.go:220] Checking for updates...
	I1206 19:07:58.863614  283586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 19:07:58.866193  283586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 19:07:58.868203  283586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 19:07:58.870467  283586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:07:58.872965  283586 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:07:58.873530  283586 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:07:58.898711  283586 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 19:07:58.898826  283586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 19:07:59.005634  283586 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-06 19:07:58.995047912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 19:07:59.005765  283586 docker.go:295] overlay module found
	I1206 19:07:59.008412  283586 out.go:177] * Using the docker driver based on existing profile
	I1206 19:07:59.010684  283586 start.go:298] selected driver: docker
	I1206 19:07:59.010704  283586 start.go:902] validating driver "docker" against &{Name:functional-796172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-796172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:07:59.010808  283586 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:07:59.014147  283586 out.go:177] 
	W1206 19:07:59.016359  283586 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 19:07:59.018840  283586 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.55s)
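
--dry-run exercises the full validation path without creating anything, so an undersized --memory still fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while a valid flag set exits 0:

out/minikube-linux-arm64 start -p functional-796172 --dry-run --memory 250MB --driver=docker --container-runtime=docker; echo $?   # 23: below the 1800MB minimum
out/minikube-linux-arm64 start -p functional-796172 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker; echo $?   # 0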

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-796172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-796172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (243.435268ms)
-- stdout --
	* [functional-796172] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1206 19:07:58.609437  283545 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:07:58.609573  283545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:07:58.609580  283545 out.go:309] Setting ErrFile to fd 2...
	I1206 19:07:58.609587  283545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:07:58.610498  283545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 19:07:58.610910  283545 out.go:303] Setting JSON to false
	I1206 19:07:58.611949  283545 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6625,"bootTime":1701883054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1206 19:07:58.612023  283545 start.go:138] virtualization:  
	I1206 19:07:58.614888  283545 out.go:177] * [functional-796172] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1206 19:07:58.617768  283545 out.go:177]   - MINIKUBE_LOCATION=17740
	I1206 19:07:58.619840  283545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 19:07:58.617899  283545 notify.go:220] Checking for updates...
	I1206 19:07:58.624369  283545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	I1206 19:07:58.626941  283545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	I1206 19:07:58.629008  283545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 19:07:58.630977  283545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 19:07:58.633338  283545 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:07:58.633938  283545 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 19:07:58.659099  283545 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 19:07:58.659219  283545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 19:07:58.763347  283545 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-06 19:07:58.745806423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 19:07:58.763473  283545 docker.go:295] overlay module found
	I1206 19:07:58.767013  283545 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1206 19:07:58.769077  283545 start.go:298] selected driver: docker
	I1206 19:07:58.769099  283545 start.go:902] validating driver "docker" against &{Name:functional-796172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-796172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 19:07:58.769216  283545 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 19:07:58.772415  283545 out.go:177] 
	W1206 19:07:58.775126  283545 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 19:07:58.777764  283545 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
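For context, this test re-runs the DryRun scenario above under a French locale and expects the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY error (its English counterpart appears in the DryRun stderr). A minimal sketch of that check outside the suite, assuming minikube reads the locale from LC_ALL/LANG as the suite's environment does; binary path, profile, and flags are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-796172",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	// Ask for French output; minikube localizes messages from the locale env vars.
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr")
	out, err := cmd.CombinedOutput() // exit status 23 is the expected outcome here
	if err != nil && strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the localized low-memory error, as expected")
	}
}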

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
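The three invocations above cover the default text status, a custom Go template (-f), and machine-readable JSON (-o json). A sketch of consuming the JSON form, assuming the single-node output is one object whose field names match the template keys used above (Host, Kubelet, APIServer, Kubeconfig):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status models only the fields read here; the struct is this example's, not minikube's.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// `minikube status` exits non-zero when a component is down, so the
	// output is decoded regardless of the command's error value.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-796172",
		"status", "-o", "json").Output()
	var st status
	if json.Unmarshal(out, &st) == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}
}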

TestFunctional/parallel/ServiceCmdConnect (12.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-796172 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-796172 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hzfcv" [c01da1e9-7c65-4547-abcf-2095c55f35fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hzfcv" [c01da1e9-7c65-4547-abcf-2095c55f35fb] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.032438215s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30394
functional_test.go:1674: http://192.168.49.2:30394: success! body:

Hostname: hello-node-connect-7799dfb7c6-hzfcv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30394
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.75s)
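The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the URL, fetch it, and check the echoserver's reflected request. A sketch of the final fetch with a small retry loop (the retry bound is arbitrary), since the NodePort can answer a moment after the pod turns Ready; the URL is the one reported by `service hello-node-connect --url`:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30394" // endpoint printed by `minikube service ... --url`
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s", body) // echoserver reflects hostname, headers and request info
			return
		}
		time.Sleep(2 * time.Second) // the NodePort may lag slightly behind pod readiness
	}
	fmt.Println("service never became reachable")
}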

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (27.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [00958b88-c1b8-46cf-80d2-2547a80ce2c7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.04653412s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-796172 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-796172 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-796172 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf43c713-1b9d-423a-9e6c-57ee382d328d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1206 19:07:29.948236  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [bf43c713-1b9d-423a-9e6c-57ee382d328d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.021496363s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-796172 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-796172 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c82a7f4-123d-4a14-8e62-4b842b1b245e] Pending
helpers_test.go:344: "sp-pod" [2c82a7f4-123d-4a14-8e62-4b842b1b245e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c82a7f4-123d-4a14-8e62-4b842b1b245e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.019223085s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-796172 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.18s)
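The second half of this test is the actual persistence check: write a file through the PVC-backed mount, delete the pod, recreate it against the same claim, and confirm the file survived. A condensed sketch of that sequence; the run helper is hypothetical, and a real check waits for the recreated sp-pod to be Running before the final ls, as the suite does above:

package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical helper wrapping kubectl against the test context.
func run(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-796172"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC mount
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the claim
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // fresh pod, same claim
	// ...wait here until the new sp-pod reports Running...
	out, _ := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // "foo" in the listing proves the volume outlived the pod
}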

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.92s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh -n functional-796172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 cp functional-796172:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd480207697/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh -n functional-796172 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.92s)
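The copy is verified in both directions: push a local file into the node, cat it over ssh, pull it back out with cp, and cat the retrieved copy. A sketch of the same round trip compared byte-for-byte on the host; the /tmp destination path is invented for the example:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	// Host -> node, then node -> host again.
	exec.Command(mk, "-p", "functional-796172", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	exec.Command(mk, "-p", "functional-796172", "cp",
		"functional-796172:/home/docker/cp-test.txt", "/tmp/cp-roundtrip.txt").Run()
	want, _ := os.ReadFile("testdata/cp-test.txt")
	got, _ := os.ReadFile("/tmp/cp-roundtrip.txt")
	fmt.Println("round trip intact:", bytes.Equal(want, got))
}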

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/244814/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /etc/test/nested/copy/244814/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/244814.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /etc/ssl/certs/244814.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/244814.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /usr/share/ca-certificates/244814.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2448142.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /etc/ssl/certs/2448142.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2448142.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /usr/share/ca-certificates/2448142.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.50s)
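The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: each synced certificate is also reachable under its subject hash plus a collision counter. A sketch of confirming that mapping for one synced cert, assuming openssl is available inside the node image:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash that names the .0 alias.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-796172",
		"ssh", "openssl x509 -noout -hash -in /etc/ssl/certs/244814.pem").Output()
	if err != nil {
		fmt.Println("openssl probe failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expect the same cert at /etc/ssl/certs/%s.0\n", hash)
}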

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-796172 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh "sudo systemctl is-active crio": exit status 1 (448.827471ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
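The non-zero exit is the point of this test: `systemctl is-active` returns 0 only for an active unit, so "inactive" comes back with exit status 3, which the ssh wrapper surfaces as a failure. A sketch that treats exactly that combination as the desired outcome:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-796172",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	// Failure plus "inactive" on stdout is the expected result: crio must be
	// disabled when the cluster runs on the docker container runtime.
	if err != nil && strings.Contains(string(out), "inactive") {
		fmt.Println("crio is inactive, as it should be")
	}
}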

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.84s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 280693: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.84s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-796172 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [920784bc-dd1e-4cd1-b7b4-596c9e5a4a1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [920784bc-dd1e-4cd1-b7b4-596c9e5a4a1c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.01727024s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-796172 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.243.50 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
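With `minikube tunnel` running, the LoadBalancer service gets the ingress IP shown above and becomes directly routable from the host. A sketch of the two steps, polling kubectl (with an arbitrary retry bound) until the tunnel publishes the IP and then fetching it; the jsonpath expression is the one used by the IngressIP step:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	var ip string
	for i := 0; i < 30 && ip == ""; i++ { // the tunnel assigns the IP asynchronously
		out, _ := exec.Command("kubectl", "--context", "functional-796172", "get",
			"svc", "nginx-svc", "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		ip = strings.TrimSpace(string(out))
		if ip == "" {
			time.Sleep(2 * time.Second)
		}
	}
	if resp, err := http.Get("http://" + ip); err == nil {
		resp.Body.Close()
		fmt.Printf("tunnel at http://%s is working (HTTP %d)\n", ip, resp.StatusCode)
	}
}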

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-796172 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-796172 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-796172 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-9rhx4" [ee5aa302-bb35-44c9-bd09-86bdcb6af6a9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-9rhx4" [ee5aa302-bb35-44c9-bd09-86bdcb6af6a9] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.018612852s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "384.795862ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "70.821992ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "361.439454ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "77.166937ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (9.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdany-port3086897371/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701889672768895860" to /tmp/TestFunctionalparallelMountCmdany-port3086897371/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701889672768895860" to /tmp/TestFunctionalparallelMountCmdany-port3086897371/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701889672768895860" to /tmp/TestFunctionalparallelMountCmdany-port3086897371/001/test-1701889672768895860
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.30674ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 19:07 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 19:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 19:07 test-1701889672768895860
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh cat /mount-9p/test-1701889672768895860
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-796172 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d030053b-b660-437a-ab51-445867cad48d] Pending
helpers_test.go:344: "busybox-mount" [d030053b-b660-437a-ab51-445867cad48d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d030053b-b660-437a-ab51-445867cad48d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d030053b-b660-437a-ab51-445867cad48d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.025176539s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-796172 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdany-port3086897371/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)
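The first findmnt probe above fails simply because the 9p mount is still being established, and the suite just probes again. A sketch of that poll-until-mounted pattern with an explicit, arbitrarily bounded retry loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// Exit 0 from `findmnt | grep` means the 9p mount is visible in the guest.
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-796172",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted over 9p")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}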

TestFunctional/parallel/ServiceCmd/List (0.76s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.76s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service list -o json
functional_test.go:1493: Took "606.112122ms" to run "out/minikube-linux-arm64 -p functional-796172 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32370
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32370
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)

TestFunctional/parallel/MountCmd/specific-port (2.61s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdspecific-port4023867296/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (819.107396ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdspecific-port4023867296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh "sudo umount -f /mount-9p": exit status 1 (434.367959ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-796172 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdspecific-port4023867296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T" /mount1: (1.342274995s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-796172 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-796172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup549534892/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

TestFunctional/parallel/Version/short (0.16s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 version -o=json --components: (1.355621359s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-796172 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-796172
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-796172
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-796172 image ls --format short --alsologtostderr:
I1206 19:08:27.617802  286712 out.go:296] Setting OutFile to fd 1 ...
I1206 19:08:27.618033  286712 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.618045  286712 out.go:309] Setting ErrFile to fd 2...
I1206 19:08:27.618052  286712 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.618410  286712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
I1206 19:08:27.619829  286712 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.619992  286712 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.620744  286712 cli_runner.go:164] Run: docker container inspect functional-796172 --format={{.State.Status}}
I1206 19:08:27.641198  286712 ssh_runner.go:195] Run: systemctl --version
I1206 19:08:27.641254  286712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796172
I1206 19:08:27.663010  286712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/functional-796172/id_rsa Username:docker}
I1206 19:08:27.770749  286712 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
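Per the stderr above, `image ls` is backed by `docker images --no-trunc --format "{{json .}}"` run over ssh. That format flag emits one JSON object per line rather than a single JSON array, so the output has to be decoded line by line; a sketch, with the struct trimmed to the fields this example prints:

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerImage covers only the fields printed below; docker emits more.
type dockerImage struct {
	Repository, Tag, ID, Size string
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc",
		"--format", "{{json .}}").Output()
	if err != nil {
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() { // one JSON object per line
		var img dockerImage
		if json.Unmarshal(sc.Bytes(), &img) == nil {
			fmt.Printf("%s:%s (%s)\n", img.Repository, img.Tag, img.Size)
		}
	}
}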

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-796172 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/google-containers/addon-resizer      | functional-796172 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | 5628e5ea3c17f | 192MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-796172 | 2570ef1308364 | 30B    |
| docker.io/library/nginx                     | alpine            | f09fc93534f6a | 43.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-796172 image ls --format table --alsologtostderr:
I1206 19:08:28.260011  286838 out.go:296] Setting OutFile to fd 1 ...
I1206 19:08:28.260243  286838 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:28.260250  286838 out.go:309] Setting ErrFile to fd 2...
I1206 19:08:28.260257  286838 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:28.260716  286838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
I1206 19:08:28.261464  286838 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:28.261621  286838 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:28.262205  286838 cli_runner.go:164] Run: docker container inspect functional-796172 --format={{.State.Status}}
I1206 19:08:28.286828  286838 ssh_runner.go:195] Run: systemctl --version
I1206 19:08:28.286886  286838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796172
I1206 19:08:28.308227  286838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/functional-796172/id_rsa Username:docker}
I1206 19:08:28.410921  286838 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-796172 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2570ef1308364be5888e391a43151ae03ade84934c1d1026fc360eae6b7ddaa2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-796172"],"size":"30"},{"id":"f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb
29c19768dca8fd3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-796172"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.i
o/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u0
03e"],"size":"244000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-796172 image ls --format json --alsologtostderr:
I1206 19:08:27.913752  286769 out.go:296] Setting OutFile to fd 1 ...
I1206 19:08:27.915031  286769 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.915043  286769 out.go:309] Setting ErrFile to fd 2...
I1206 19:08:27.915050  286769 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.923506  286769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
I1206 19:08:27.925723  286769 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.925936  286769 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.932472  286769 cli_runner.go:164] Run: docker container inspect functional-796172 --format={{.State.Status}}
I1206 19:08:27.966486  286769 ssh_runner.go:195] Run: systemctl --version
I1206 19:08:27.966539  286769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796172
I1206 19:08:27.999242  286769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/functional-796172/id_rsa Username:docker}
I1206 19:08:28.118949  286769 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
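The image list above is emitted as a single JSON array, which makes it easy to post-process. For example (a sketch, assuming jq is installed on the host; the field names are taken from the output above):

    out/minikube-linux-arm64 -p functional-796172 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'

This prints one tag/size pair per image, one line each.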

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-796172 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 2570ef1308364be5888e391a43151ae03ade84934c1d1026fc360eae6b7ddaa2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-796172
size: "30"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-796172
size: "32900000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-796172 image ls --format yaml --alsologtostderr:
I1206 19:08:27.570366  286711 out.go:296] Setting OutFile to fd 1 ...
I1206 19:08:27.570610  286711 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.570642  286711 out.go:309] Setting ErrFile to fd 2...
I1206 19:08:27.570665  286711 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:27.570976  286711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
I1206 19:08:27.571714  286711 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.571910  286711 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:27.572950  286711 cli_runner.go:164] Run: docker container inspect functional-796172 --format={{.State.Status}}
I1206 19:08:27.622776  286711 ssh_runner.go:195] Run: systemctl --version
I1206 19:08:27.622837  286711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796172
I1206 19:08:27.655303  286711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/functional-796172/id_rsa Username:docker}
I1206 19:08:27.758503  286711 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-796172 ssh pgrep buildkitd: exit status 1 (457.243291ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image build -t localhost/my-image:functional-796172 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image build -t localhost/my-image:functional-796172 testdata/build --alsologtostderr: (2.217615811s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-796172 image build -t localhost/my-image:functional-796172 testdata/build --alsologtostderr:
I1206 19:08:28.351387  286845 out.go:296] Setting OutFile to fd 1 ...
I1206 19:08:28.351962  286845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:28.351983  286845 out.go:309] Setting ErrFile to fd 2...
I1206 19:08:28.351990  286845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 19:08:28.352353  286845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
I1206 19:08:28.353077  286845 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:28.353679  286845 config.go:182] Loaded profile config "functional-796172": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1206 19:08:28.354314  286845 cli_runner.go:164] Run: docker container inspect functional-796172 --format={{.State.Status}}
I1206 19:08:28.376745  286845 ssh_runner.go:195] Run: systemctl --version
I1206 19:08:28.376804  286845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796172
I1206 19:08:28.400998  286845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/functional-796172/id_rsa Username:docker}
I1206 19:08:28.510635  286845 build_images.go:151] Building image from path: /tmp/build.4226448642.tar
I1206 19:08:28.510704  286845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 19:08:28.523017  286845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4226448642.tar
I1206 19:08:28.528161  286845 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4226448642.tar: stat -c "%s %y" /var/lib/minikube/build/build.4226448642.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4226448642.tar': No such file or directory
I1206 19:08:28.528198  286845 ssh_runner.go:362] scp /tmp/build.4226448642.tar --> /var/lib/minikube/build/build.4226448642.tar (3072 bytes)
I1206 19:08:28.560252  286845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4226448642
I1206 19:08:28.572072  286845 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4226448642 -xf /var/lib/minikube/build/build.4226448642.tar
I1206 19:08:28.583604  286845 docker.go:346] Building image: /var/lib/minikube/build/build.4226448642
I1206 19:08:28.583688  286845 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-796172 /var/lib/minikube/build/build.4226448642
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:5727ea1ecb8cda8023bf6163ae1069651da0842c44176586f68f039243ccf20f done
#8 naming to localhost/my-image:functional-796172 done
#8 DONE 0.0s
I1206 19:08:30.434278  286845 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-796172 /var/lib/minikube/build/build.4226448642: (1.850563652s)
I1206 19:08:30.434358  286845 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4226448642
I1206 19:08:30.446327  286845 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4226448642.tar
I1206 19:08:30.457926  286845 build_images.go:207] Built localhost/my-image:functional-796172 from /tmp/build.4226448642.tar
I1206 19:08:30.457956  286845 build_images.go:123] succeeded building to: functional-796172
I1206 19:08:30.457963  286845 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
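From the BuildKit steps above ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /), the Dockerfile under testdata/build is evidently along these lines (a reconstruction from the log, not the verbatim fixture; the actual build pins the busybox image by digest):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /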

TestFunctional/parallel/ImageCommands/Setup (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.040900345s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-796172
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr
E1206 19:08:10.908459  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr: (3.621677357s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.89s)
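Setup and ImageLoadDaemon together exercise the host-to-node image path. The manual equivalent, using the same commands the tests run, is roughly:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-796172
    out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172
    out/minikube-linux-arm64 -p functional-796172 image ls    # the tag should now appear in the node's image list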

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr
2023/12/06 19:08:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr: (2.976235799s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/DockerEnv/bash (1.76s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-796172 docker-env) && out/minikube-linux-arm64 status -p functional-796172"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-796172 docker-env) && out/minikube-linux-arm64 status -p functional-796172": (1.228083882s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-796172 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.76s)
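The DockerEnv test depends on docker-env printing shell exports (DOCKER_HOST and related variables) that repoint the local docker CLI at the daemon inside the minikube node. The pattern it validates:

    eval $(out/minikube-linux-arm64 -p functional-796172 docker-env)
    docker images    # now lists images from the node's daemon, not the host's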

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.462929165s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-796172
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image load --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr: (3.691297237s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image save gcr.io/google-containers/addon-resizer:functional-796172 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image save gcr.io/google-containers/addon-resizer:functional-796172 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.068971161s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image rm gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.19508326s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)
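ImageSaveToFile and ImageLoadFromFile form a tarball round trip. Condensed (the tar path here is illustrative; the tests use the Jenkins workspace path shown above):

    out/minikube-linux-arm64 -p functional-796172 image save gcr.io/google-containers/addon-resizer:functional-796172 ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-796172 image load ./addon-resizer-save.tar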

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-796172
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-796172 image save --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-796172 image save --daemon gcr.io/google-containers/addon-resizer:functional-796172 --alsologtostderr: (1.013273196s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-796172
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.05s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-796172
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-796172
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-796172
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (38.03s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-650179 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-650179 --driver=docker  --container-runtime=docker: (38.02632173s)
--- PASS: TestImageBuild/serial/Setup (38.03s)

TestImageBuild/serial/NormalBuild (2.07s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-650179
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-650179: (2.074321382s)
--- PASS: TestImageBuild/serial/NormalBuild (2.07s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-650179
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-650179
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-650179
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestIngressAddonLegacy/StartLegacyK8sCluster (115.88s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-998555 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1206 19:09:32.831727  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-998555 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m55.87520756s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (115.88s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.09s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons enable ingress --alsologtostderr -v=5: (11.085891777s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.09s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-998555 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

TestJSONOutput/start/Command (86.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-693972 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1206 19:12:26.775004  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:12:29.335237  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:12:34.456011  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:12:44.696361  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:13:05.176692  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:13:46.136985  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-693972 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m26.636294527s)
--- PASS: TestJSONOutput/start/Command (86.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-693972 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-693972 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-693972 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-693972 --output=json --user=testUser: (5.855147922s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-082878 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-082878 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.140786ms)
-- stdout --
	{"specversion":"1.0","id":"3e042734-843f-4552-9933-43c8e0cef586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-082878] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4994e5e8-2ec3-4b11-bf27-8b265f344b67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17740"}}
	{"specversion":"1.0","id":"72c744cd-be47-44d1-a0c5-369ed5180926","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"080145e0-accd-4fab-8953-6c60f3d3c28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig"}}
	{"specversion":"1.0","id":"971ff3d9-350d-4157-8cbe-f69902d1050c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube"}}
	{"specversion":"1.0","id":"5ad56a80-ec5e-4af6-b21e-3104b09dfc19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"70e3d7c0-59fd-4fac-855d-3eee55e692e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"79c32282-c568-42a5-86e5-a07fb035ee15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-082878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-082878
--- PASS: TestErrorJSONOutput (0.27s)
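Each line of the --output=json stream above is a CloudEvents envelope, so a caller can detect failures by filtering on the event type. A sketch, assuming jq and reusing the flags from the failing run above:

    out/minikube-linux-arm64 start -p json-output-error-082878 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

On this run that would print: The driver 'fail' is not supported on linux/arm64.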

TestKicCustomNetwork/create_custom_network (36.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-217109 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-217109 --network=: (33.916017828s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-217109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-217109
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-217109: (2.156798687s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.10s)

TestKicCustomNetwork/use_default_bridge_network (35.68s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-891647 --network=bridge
E1206 19:15:08.058407  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-891647 --network=bridge: (33.575099562s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-891647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-891647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-891647: (2.074696457s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.68s)

TestKicExistingNetwork (38.12s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-660928 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-660928 --network=existing-network: (35.820387566s)
helpers_test.go:175: Cleaning up "existing-network-660928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-660928
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-660928: (2.125020638s)
--- PASS: TestKicExistingNetwork (38.12s)

TestKicCustomSubnet (39.03s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-258731 --subnet=192.168.60.0/24
E1206 19:16:26.192900  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.198537  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.208824  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.229070  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.269292  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.349534  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.509978  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:26.830448  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:27.471334  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-258731 --subnet=192.168.60.0/24: (36.77176757s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-258731 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-258731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-258731
E1206 19:16:28.752361  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-258731: (2.234251378s)
--- PASS: TestKicCustomSubnet (39.03s)
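The subnet check above boils down to creating the cluster with --subnet and reading the CIDR back off the Docker network:

    out/minikube-linux-arm64 start -p custom-subnet-258731 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-258731 --format "{{(index .IPAM.Config 0).Subnet}}"    # expected: 192.168.60.0/24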

TestKicStaticIP (39.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-620145 --static-ip=192.168.200.200
E1206 19:16:31.312502  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:36.433215  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:46.674027  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:16:48.982694  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:17:07.154699  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-620145 --static-ip=192.168.200.200: (37.047367339s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-620145 ip
helpers_test.go:175: Cleaning up "static-ip-620145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-620145
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-620145: (2.178795142s)
--- PASS: TestKicStaticIP (39.42s)
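Likewise, the static IP check starts the cluster with --static-ip and confirms minikube reports the same address back:

    out/minikube-linux-arm64 start -p static-ip-620145 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-620145 ip    # expected: 192.168.200.200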

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (78.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-972679 --driver=docker  --container-runtime=docker
E1206 19:17:24.215783  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-972679 --driver=docker  --container-runtime=docker: (35.131351993s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-975247 --driver=docker  --container-runtime=docker
E1206 19:17:48.114932  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:17:51.902466  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-975247 --driver=docker  --container-runtime=docker: (37.687721558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-972679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-975247
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-975247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-975247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-975247: (2.154575845s)
helpers_test.go:175: Cleaning up "first-972679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-972679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-972679: (2.212914464s)
--- PASS: TestMinikubeProfile (78.68s)
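The profile list -ojson output used above is machine-readable; the valid profile names can be extracted with jq (a sketch; the top-level valid/invalid grouping is an assumption about the JSON shape, which this log does not show):

    out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'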

TestMountStart/serial/StartWithMountFirst (8.95s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-393080 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-393080 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.948720099s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.95s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-393080 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (11.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-395189 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-395189 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.83403462s)
--- PASS: TestMountStart/serial/StartWithMountSecond (11.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.55s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-393080 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-393080 --alsologtostderr -v=5: (1.545381629s)
--- PASS: TestMountStart/serial/DeleteFirst (1.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-395189
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-395189: (1.271266172s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-395189
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-395189: (7.879913703s)
--- PASS: TestMountStart/serial/RestartStopped (8.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (82.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1206 19:19:10.035470  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m21.762146936s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.50s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (37.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-498731 -- rollout status deployment/busybox: (4.782799006s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-dcttf -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-mlmrl -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-dcttf -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-mlmrl -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-dcttf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-mlmrl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.88s)
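The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are the test polling until both busybox replicas report an IP. A hand-rolled equivalent of that wait loop (a sketch, assuming the same kubectl context and the default namespace):

    until [ "$(kubectl --context multinode-498731 get pods \
        -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 2 ]; do
      sleep 2   # IPs appear once both pods are scheduled and their sandboxes are up
    done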

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.29s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-dcttf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-dcttf -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-mlmrl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498731 -- exec busybox-5bc68d56bd-mlmrl -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.29s)
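The pipeline run inside each pod extracts the host gateway's IP from busybox's nslookup output: awk 'NR==5' keeps the fifth line (the answer's Address line in that format) and cut takes the third space-separated field, leaving the bare IP that is then pinged:

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"    # 192.168.58.1 in this run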

                                                
                                    
TestMultiNode/serial/AddNode (20.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-498731 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-498731 -v 3 --alsologtostderr: (19.431722639s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-498731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1206 19:21:26.193202  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

                                                
                                    
TestMultiNode/serial/CopyFile (12.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp testdata/cp-test.txt multinode-498731:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2875406453/001/cp-test_multinode-498731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731:/home/docker/cp-test.txt multinode-498731-m02:/home/docker/cp-test_multinode-498731_multinode-498731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test_multinode-498731_multinode-498731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731:/home/docker/cp-test.txt multinode-498731-m03:/home/docker/cp-test_multinode-498731_multinode-498731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test_multinode-498731_multinode-498731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp testdata/cp-test.txt multinode-498731-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2875406453/001/cp-test_multinode-498731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m02:/home/docker/cp-test.txt multinode-498731:/home/docker/cp-test_multinode-498731-m02_multinode-498731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test_multinode-498731-m02_multinode-498731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m02:/home/docker/cp-test.txt multinode-498731-m03:/home/docker/cp-test_multinode-498731-m02_multinode-498731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test_multinode-498731-m02_multinode-498731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp testdata/cp-test.txt multinode-498731-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2875406453/001/cp-test_multinode-498731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m03:/home/docker/cp-test.txt multinode-498731:/home/docker/cp-test_multinode-498731-m03_multinode-498731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731 "sudo cat /home/docker/cp-test_multinode-498731-m03_multinode-498731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 cp multinode-498731-m03:/home/docker/cp-test.txt multinode-498731-m02:/home/docker/cp-test_multinode-498731-m03_multinode-498731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 ssh -n multinode-498731-m02 "sudo cat /home/docker/cp-test_multinode-498731-m03_multinode-498731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.26s)
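All three copy directions of minikube cp are exercised above; in general form:

    minikube -p <profile> cp <local-file> <node>:<path>        # host -> node
    minikube -p <profile> cp <node>:<path> <local-path>        # node -> host
    minikube -p <profile> cp <node-a>:<path> <node-b>:<path>   # node -> node
    minikube -p <profile> ssh -n <node> "sudo cat <path>"      # how each copy is verified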

                                                
                                    
TestMultiNode/serial/StopNode (2.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-498731 node stop m03: (1.27968123s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498731 status: exit status 7 (594.42493ms)
-- stdout --
	multinode-498731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr: exit status 7 (613.952444ms)
-- stdout --
	multinode-498731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1206 19:21:40.785905  351699 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:21:40.786231  351699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:21:40.786244  351699 out.go:309] Setting ErrFile to fd 2...
	I1206 19:21:40.786251  351699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:21:40.786639  351699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 19:21:40.786897  351699 out.go:303] Setting JSON to false
	I1206 19:21:40.786991  351699 mustload.go:65] Loading cluster: multinode-498731
	I1206 19:21:40.788185  351699 config.go:182] Loaded profile config "multinode-498731": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:21:40.788226  351699 status.go:255] checking status of multinode-498731 ...
	I1206 19:21:40.788495  351699 notify.go:220] Checking for updates...
	I1206 19:21:40.789238  351699 cli_runner.go:164] Run: docker container inspect multinode-498731 --format={{.State.Status}}
	I1206 19:21:40.810411  351699 status.go:330] multinode-498731 host status = "Running" (err=<nil>)
	I1206 19:21:40.810436  351699 host.go:66] Checking if "multinode-498731" exists ...
	I1206 19:21:40.810791  351699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498731
	I1206 19:21:40.831205  351699 host.go:66] Checking if "multinode-498731" exists ...
	I1206 19:21:40.831531  351699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 19:21:40.831577  351699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498731
	I1206 19:21:40.862263  351699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/multinode-498731/id_rsa Username:docker}
	I1206 19:21:40.963044  351699 ssh_runner.go:195] Run: systemctl --version
	I1206 19:21:40.968777  351699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:21:40.982667  351699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 19:21:41.059004  351699 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-06 19:21:41.046171542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1206 19:21:41.059593  351699 kubeconfig.go:92] found "multinode-498731" server: "https://192.168.58.2:8443"
	I1206 19:21:41.059616  351699 api_server.go:166] Checking apiserver status ...
	I1206 19:21:41.059662  351699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 19:21:41.081490  351699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2152/cgroup
	I1206 19:21:41.097986  351699 api_server.go:182] apiserver freezer: "8:freezer:/docker/74928b6f4a813ff12f2ec5dd4e654afa4a007e88d9835e8d6721628d471ee82e/kubepods/burstable/pod4c1584870b3ee334912848d6ec2eb67e/0142edb82348867b386f7b827029361932bbea4717531f6e140800b8e9b23f23"
	I1206 19:21:41.098069  351699 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/74928b6f4a813ff12f2ec5dd4e654afa4a007e88d9835e8d6721628d471ee82e/kubepods/burstable/pod4c1584870b3ee334912848d6ec2eb67e/0142edb82348867b386f7b827029361932bbea4717531f6e140800b8e9b23f23/freezer.state
	I1206 19:21:41.108886  351699 api_server.go:204] freezer state: "THAWED"
	I1206 19:21:41.108917  351699 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1206 19:21:41.118116  351699 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1206 19:21:41.118145  351699 status.go:421] multinode-498731 apiserver status = Running (err=<nil>)
	I1206 19:21:41.118156  351699 status.go:257] multinode-498731 status: &{Name:multinode-498731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 19:21:41.118173  351699 status.go:255] checking status of multinode-498731-m02 ...
	I1206 19:21:41.118485  351699 cli_runner.go:164] Run: docker container inspect multinode-498731-m02 --format={{.State.Status}}
	I1206 19:21:41.141292  351699 status.go:330] multinode-498731-m02 host status = "Running" (err=<nil>)
	I1206 19:21:41.141324  351699 host.go:66] Checking if "multinode-498731-m02" exists ...
	I1206 19:21:41.141641  351699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498731-m02
	I1206 19:21:41.160249  351699 host.go:66] Checking if "multinode-498731-m02" exists ...
	I1206 19:21:41.160666  351699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 19:21:41.160717  351699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498731-m02
	I1206 19:21:41.179389  351699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/17740-239434/.minikube/machines/multinode-498731-m02/id_rsa Username:docker}
	I1206 19:21:41.282773  351699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 19:21:41.297696  351699 status.go:257] multinode-498731-m02 status: &{Name:multinode-498731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 19:21:41.297731  351699 status.go:255] checking status of multinode-498731-m03 ...
	I1206 19:21:41.298049  351699 cli_runner.go:164] Run: docker container inspect multinode-498731-m03 --format={{.State.Status}}
	I1206 19:21:41.322137  351699 status.go:330] multinode-498731-m03 host status = "Stopped" (err=<nil>)
	I1206 19:21:41.322162  351699 status.go:343] host is not running, skipping remaining checks
	I1206 19:21:41.322170  351699 status.go:257] multinode-498731-m03 status: &{Name:multinode-498731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
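The stderr trace shows how status is assembled per node: inspect the container state through the docker CLI, probe the kubelet unit over SSH, then hit the apiserver's /healthz endpoint. The same probes by hand (a sketch; -k skips TLS verification because the cluster serves its own CA):

    docker container inspect multinode-498731 --format '{{.State.Status}}'   # host: running/exited
    minikube -p multinode-498731 ssh "sudo systemctl is-active kubelet"      # kubelet unit state
    curl -sk https://192.168.58.2:8443/healthz                               # prints "ok" when healthy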

                                                
                                    
TestMultiNode/serial/StartAfterStop (14.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 node start m03 --alsologtostderr
E1206 19:21:48.982565  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:21:53.876180  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-498731 node start m03 --alsologtostderr: (13.789420973s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.69s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (129.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498731
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-498731
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-498731: (22.816338542s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498731 --wait=true -v=8 --alsologtostderr
E1206 19:22:24.216076  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:23:12.033022  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498731 --wait=true -v=8 --alsologtostderr: (1m46.068305345s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (129.08s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-498731 node delete m03: (4.506129286s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)
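The go-template in the final check strips each node down to the status of its Ready condition, one per line, so after the delete the assertion is simply that both remaining lines read True:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'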

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-498731 stop: (21.489819144s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498731 status: exit status 7 (106.96419ms)
-- stdout --
	multinode-498731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr: exit status 7 (106.707824ms)
-- stdout --
	multinode-498731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1206 19:24:32.098036  367791 out.go:296] Setting OutFile to fd 1 ...
	I1206 19:24:32.098224  367791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:24:32.098237  367791 out.go:309] Setting ErrFile to fd 2...
	I1206 19:24:32.098244  367791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 19:24:32.098553  367791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17740-239434/.minikube/bin
	I1206 19:24:32.098815  367791 out.go:303] Setting JSON to false
	I1206 19:24:32.098936  367791 mustload.go:65] Loading cluster: multinode-498731
	I1206 19:24:32.098970  367791 notify.go:220] Checking for updates...
	I1206 19:24:32.099370  367791 config.go:182] Loaded profile config "multinode-498731": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1206 19:24:32.099381  367791 status.go:255] checking status of multinode-498731 ...
	I1206 19:24:32.099910  367791 cli_runner.go:164] Run: docker container inspect multinode-498731 --format={{.State.Status}}
	I1206 19:24:32.120019  367791 status.go:330] multinode-498731 host status = "Stopped" (err=<nil>)
	I1206 19:24:32.120043  367791 status.go:343] host is not running, skipping remaining checks
	I1206 19:24:32.120050  367791 status.go:257] multinode-498731 status: &{Name:multinode-498731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 19:24:32.120082  367791 status.go:255] checking status of multinode-498731-m02 ...
	I1206 19:24:32.120442  367791 cli_runner.go:164] Run: docker container inspect multinode-498731-m02 --format={{.State.Status}}
	I1206 19:24:32.139599  367791 status.go:330] multinode-498731-m02 host status = "Stopped" (err=<nil>)
	I1206 19:24:32.139618  367791 status.go:343] host is not running, skipping remaining checks
	I1206 19:24:32.139625  367791 status.go:257] multinode-498731-m02 status: &{Name:multinode-498731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.70s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.47s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498731 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498731 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.407590645s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498731 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498731
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498731-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-498731-m02 --driver=docker  --container-runtime=docker: exit status 14 (112.924357ms)
-- stdout --
	* [multinode-498731-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-498731-m02' is duplicated with machine name 'multinode-498731-m02' in profile 'multinode-498731'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498731-m03 --driver=docker  --container-runtime=docker
E1206 19:26:26.197814  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498731-m03 --driver=docker  --container-runtime=docker: (38.809379011s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-498731
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-498731: exit status 80 (383.638285ms)
-- stdout --
	* Adding node m03 to cluster multinode-498731
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-498731-m03 already exists in multinode-498731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-498731-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-498731-m03: (2.169733277s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.55s)
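Both refusals above are intentional: the secondary nodes of profile X are machines named X-m02, X-m03, and so on, so a new profile named multinode-498731-m02 collides with an existing machine name (exit 14, MK_USAGE), and node add declines to create a node whose name another profile already owns (exit 80, GUEST_NODE_ADD). The reserved node names can be listed with:

    minikube node list -p multinode-498731    # multinode-498731, multinode-498731-m02, ...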

                                                
                                    
TestPreload (198.6s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1206 19:26:48.982865  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:27:24.215594  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.134423976s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373873 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-373873 image pull gcr.io/k8s-minikube/busybox: (1.39490384s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-373873
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-373873: (10.838151608s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373873 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1206 19:28:47.263075  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373873 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m22.354934133s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373873 image list
helpers_test.go:175: Cleaning up "test-preload-373873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-373873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-373873: (2.577200732s)
--- PASS: TestPreload (198.60s)
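Condensed, the scenario checks that an image pulled into a non-preloaded v1.24.4 cluster is still present after a stop and a restart that picks up the preloaded tarball:

    minikube start -p test-preload-373873 --kubernetes-version=v1.24.4 --preload=false --driver=docker --container-runtime=docker
    minikube -p test-preload-373873 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-373873
    minikube start -p test-preload-373873 --driver=docker --container-runtime=docker
    minikube -p test-preload-373873 image list | grep busybox   # must still be listed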

                                                
                                    
TestScheduledStopUnix (104.72s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-500578 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-500578 --memory=2048 --driver=docker  --container-runtime=docker: (30.954742181s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500578 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-500578 -n scheduled-stop-500578
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500578 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500578 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500578 -n scheduled-stop-500578
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500578
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500578 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1206 19:31:26.193671  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500578
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-500578: exit status 7 (85.259244ms)
-- stdout --
	scheduled-stop-500578
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500578 -n scheduled-stop-500578
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500578 -n scheduled-stop-500578: exit status 7 (91.663751ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-500578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-500578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-500578: (1.840397435s)
--- PASS: TestScheduledStopUnix (104.72s)
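The schedule/cancel cycle driven above, by hand:

    minikube stop -p scheduled-stop-500578 --schedule 5m                  # arm a stop five minutes out
    minikube status -p scheduled-stop-500578 --format='{{.TimeToStop}}'   # inspect the pending timer
    minikube stop -p scheduled-stop-500578 --cancel-scheduled             # disarm it
    minikube stop -p scheduled-stop-500578 --schedule 15s                 # re-arm; status reports Stopped shortly after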

                                                
                                    
TestSkaffold (113.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe762633686 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-228311 --memory=2600 --driver=docker  --container-runtime=docker
E1206 19:31:48.982726  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-228311 --memory=2600 --driver=docker  --container-runtime=docker: (33.677726795s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe762633686 run --minikube-profile skaffold-228311 --kube-context skaffold-228311 --status-check=true --port-forward=false --interactive=false
E1206 19:32:24.215819  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:32:49.236438  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe762633686 run --minikube-profile skaffold-228311 --kube-context skaffold-228311 --status-check=true --port-forward=false --interactive=false: (1m4.221298011s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6d8df87f96-bdd6c" [ced211fa-6705-416b-bc7c-0348f09b31ef] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.024023662s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-8566784774-kzvbn" [c19165f7-d9d6-4c2c-9b4d-74a783c18bc5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010422867s
helpers_test.go:175: Cleaning up "skaffold-228311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-228311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-228311: (3.032975456s)
--- PASS: TestSkaffold (113.19s)
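The skaffold invocation, with its flags unpacked (the binary path is the test's temporary copy):

    # --minikube-profile / --kube-context       target the freshly created cluster
    # --status-check=true                       wait for deployments to stabilize before returning
    # --port-forward=false --interactive=false  keep the run non-interactive for CI
    /tmp/skaffold.exe762633686 run --minikube-profile skaffold-228311 \
        --kube-context skaffold-228311 --status-check=true \
        --port-forward=false --interactive=false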

                                                
                                    
TestInsufficientStorage (12.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-490600 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-490600 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.394590031s)
-- stdout --
	{"specversion":"1.0","id":"ca0694ef-14b8-439f-b8dc-cc6a730a6ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-490600] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"61a7498a-33e0-40d3-b628-147a9be8e235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17740"}}
	{"specversion":"1.0","id":"4c18aea1-a958-4aed-919c-cfbc6ed3930e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b641e67d-d9db-4224-a399-e7385f4d8d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig"}}
	{"specversion":"1.0","id":"3139e6f1-7071-4fb5-9fb0-5ce53489a1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube"}}
	{"specversion":"1.0","id":"d0e4ee11-4021-4a34-b26f-29beab41bccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d923e902-98af-4b7c-9993-453c411a0176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8e443f23-8d29-4062-9261-6b0f11232ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"193d88c1-b22d-4544-ae25-61d125e71564","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"594db47f-a75a-4008-856a-03f11a78046a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"57d19260-2058-4207-898b-6848791cb2bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"27f71f21-6f86-4573-b112-59eab79b094e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-490600 in cluster insufficient-storage-490600","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"de1a5439-3be3-4ae1-8929-e29c51fdee2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef663029-9665-4a90-a126-b85040e4e34f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4698cfd-25b7-4cfa-bcac-09b24cd14ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-490600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-490600 --output=json --layout=cluster: exit status 7 (346.358753ms)
-- stdout --
	{"Name":"insufficient-storage-490600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-490600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 19:33:49.582227  404445 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-490600" does not appear in /home/jenkins/minikube-integration/17740-239434/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-490600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-490600 --output=json --layout=cluster: exit status 7 (342.067553ms)
-- stdout --
	{"Name":"insufficient-storage-490600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-490600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 19:33:49.924819  404498 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-490600" does not appear in /home/jenkins/minikube-integration/17740-239434/kubeconfig
	E1206 19:33:49.937230  404498 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/insufficient-storage-490600/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-490600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-490600
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-490600: (1.782623611s)
--- PASS: TestInsufficientStorage (12.87s)
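
Note that the status payloads above carry a JSON body even when the binary exits non-zero (exit status 7 here). A minimal Go sketch of decoding the --layout=cluster shape and flagging the storage condition; the struct covers only the top-level fields shown above, and the plain "minikube" binary name is an assumption standing in for out/minikube-linux-arm64.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

// clusterStatus mirrors the top level of the --output=json --layout=cluster
// payload shown above; the nested Components/Nodes are omitted for brevity.
type clusterStatus struct {
    Name         string
    StatusCode   int
    StatusName   string
    StatusDetail string
}

func main() {
    // exec.Command(...).Output() still returns the captured stdout when the
    // command exits non-zero, so decode the body before giving up on the error.
    out, runErr := exec.Command("minikube", "status",
        "-p", "insufficient-storage-490600",
        "--output=json", "--layout=cluster").Output()
    var st clusterStatus
    if err := json.Unmarshal(out, &st); err != nil {
        log.Fatalf("decode status: %v (run error: %v)", err, runErr)
    }
    if st.StatusCode == 507 { // InsufficientStorage, as in the log above
        fmt.Printf("%s: %s\n", st.Name, st.StatusDetail)
    }
}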

                                                
                                    
x
+
TestRunningBinaryUpgrade (125.83s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1222664754.exe start -p running-upgrade-534573 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1206 19:43:53.461619  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1222664754.exe start -p running-upgrade-534573 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m7.505662378s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-534573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-534573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.726187294s)
helpers_test.go:175: Cleaning up "running-upgrade-534573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-534573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-534573: (2.48554143s)
--- PASS: TestRunningBinaryUpgrade (125.83s)
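
The sequencing is the interesting part of this test: an old release binary (v1.17.0, downloaded to a temp path) creates a cluster and leaves it running, then the binary under test restarts the same profile in place. A hedged sketch of that flow with os/exec; the old-binary path is a placeholder for the temp download, and note that the v1.17.0 release still used the deprecated --vm-driver spelling, as the log shows.

package main

import (
    "log"
    "os/exec"
)

// start invokes one minikube binary against the shared profile and waits.
func start(bin string, flags ...string) {
    args := append([]string{"start", "-p", "running-upgrade-534573", "--memory=2200"}, flags...)
    cmd := exec.Command(bin, args...)
    cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
    if err := cmd.Run(); err != nil {
        log.Fatalf("%s: %v", bin, err)
    }
}

func main() {
    // Old release first (placeholder path), with its --vm-driver spelling.
    start("/tmp/minikube-v1.17.0", "--vm-driver=docker", "--container-runtime=docker")
    // Then the binary under test restarts the same, still-running cluster.
    start("out/minikube-linux-arm64", "--driver=docker", "--container-runtime=docker")
}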

                                                
                                    
x
+
TestKubernetesUpgrade (399.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1206 19:36:26.193242  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m12.781544575s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-878439
E1206 19:36:48.982243  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-878439: (1.313309407s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-878439 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-878439 status --format={{.Host}}: exit status 7 (82.348572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1206 19:37:24.215766  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:38:25.775805  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:25.781065  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:25.791347  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:25.811650  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:25.851933  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:25.932326  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:26.092778  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:26.413411  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:27.054431  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:28.334800  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:30.896481  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:36.017557  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:38:46.257962  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:39:06.738739  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:39:47.699505  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:39:52.033650  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m44.259712445s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-878439 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (115.76696ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878439] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-878439
	    minikube start -p kubernetes-upgrade-878439 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8784392 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-878439 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1206 19:41:48.982560  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-878439 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.471253854s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-878439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-878439
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-878439: (2.441907139s)
--- PASS: TestKubernetesUpgrade (399.61s)
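
The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) is a version-ordering guard: a requested --kubernetes-version older than the running cluster's version is rejected outright. A minimal sketch of the shape of that check, using the external golang.org/x/mod/semver module as an assumption rather than whatever comparison minikube uses internally; note it orders v1.16.0 below the v1.29.0-rc.1 pre-release correctly.

package main

import (
    "fmt"
    "os"

    "golang.org/x/mod/semver"
)

// guardDowngrade mimics the check that produced exit status 106 above:
// starting an existing cluster with an older --kubernetes-version is refused.
func guardDowngrade(current, requested string) error {
    if semver.Compare(requested, current) < 0 {
        return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s",
            current, requested)
    }
    return nil
}

func main() {
    if err := guardDowngrade("v1.29.0-rc.1", "v1.16.0"); err != nil {
        fmt.Fprintln(os.Stderr, "X Exiting due to", err)
        os.Exit(106) // exit code observed in the log above
    }
}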

                                                
                                    
x
+
TestPause/serial/Start (60.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-263168 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-263168 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m0.891764333s)
--- PASS: TestPause/serial/Start (60.89s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-263168 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-263168 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.978584515s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.00s)

                                                
                                    
x
+
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-263168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-263168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-263168 --output=json --layout=cluster: exit status 2 (460.368802ms)

                                                
                                                
-- stdout --
	{"Name":"pause-263168","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-263168","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
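
Across this report the cluster layout reuses HTTP-flavored status codes: 200 for a healthy component, 405 for a stopped one (the kubelet while paused), 418 for Paused, 500 for the kubeconfig error, and 507 for insufficient storage. A tiny reference sketch collecting only the codes that actually appear in these logs, not minikube's full table.

package main

import "fmt"

// Names for the HTTP-style status codes observed in the --layout=cluster
// payloads in this report; this is only the subset that appears above.
var statusNames = map[int]string{
    200: "OK",
    405: "Stopped",             // apiserver/kubelet while stopped or paused
    418: "Paused",              // borrowed from HTTP 418
    500: "Error",               // e.g. kubeconfig endpoint missing
    507: "InsufficientStorage", // /var almost out of disk space
}

func main() {
    for code, name := range statusNames {
        fmt.Printf("%d => %s\n", code, name)
    }
}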

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-263168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-263168 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-263168 --alsologtostderr -v=5: (1.049503863s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-263168 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-263168 --alsologtostderr -v=5: (2.614620947s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.2s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-263168
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-263168: exit status 1 (20.655799ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-263168: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.20s)
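
Cleanup verification here leans on "docker volume inspect" failing once the volume is gone: exit status 1 plus a "no such volume" message, exactly as captured above. A hedged sketch of performing the same probe from Go; matching on the daemon's error string is an assumption about its wording staying stable.

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

// volumeGone reports whether a Docker volume no longer exists, using the
// same probe as the test above: inspect fails once the volume is deleted.
func volumeGone(name string) (bool, error) {
    out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
    if err == nil {
        return false, nil // inspect succeeded, so the volume still exists
    }
    if strings.Contains(string(out), "no such volume") {
        return true, nil
    }
    return false, fmt.Errorf("unexpected inspect failure: %v: %s", err, out)
}

func main() {
    gone, err := volumeGone("pause-263168")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("volume gone:", gone)
}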

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (129.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3047550002.exe start -p stopped-upgrade-886251 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1206 19:42:24.216187  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3047550002.exe start -p stopped-upgrade-886251 --memory=2200 --vm-driver=docker  --container-runtime=docker: (58.65786314s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3047550002.exe -p stopped-upgrade-886251 stop
E1206 19:43:25.776247  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3047550002.exe -p stopped-upgrade-886251 stop: (10.872615076s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-886251 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-886251 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.183451986s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-886251
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-886251: (2.467724783s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (158.211771ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-550884] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17740-239434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17740-239434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.16s)
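
The exit status 14 (MK_USAGE) above is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of that guard with the standard flag package; the flag names mirror the CLI, but the wiring is an assumption (the real CLI uses its own flag plumbing), and the config-unset suggestion from the log is left out.

package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    flag.Parse()

    // Mutually exclusive flags, matching the MK_USAGE error above.
    if *noK8s && *k8sVersion != "" {
        fmt.Fprintln(os.Stderr,
            "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
        os.Exit(14) // exit code observed in the log
    }
}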

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550884 --driver=docker  --container-runtime=docker
E1206 19:45:27.264097  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550884 --driver=docker  --container-runtime=docker: (45.195161686s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-550884 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --driver=docker  --container-runtime=docker: (16.375413897s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-550884 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-550884 status -o json: exit status 2 (505.12811ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-550884","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-550884
E1206 19:46:26.193182  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-550884: (1.870188732s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.75s)
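
Without --layout=cluster, "status -o json" emits the flat shape shown above, and it exits 2 when a component is stopped even though the host is running. A sketch of decoding that flat form; the struct mirrors the keys in the log, and the exported field names match the JSON keys so no struct tags are needed.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

// flatStatus mirrors the plain `status -o json` payload in the log above.
type flatStatus struct {
    Name       string
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
    Worker     bool
}

func main() {
    // A non-zero exit (status 2 above) still writes the JSON body to stdout,
    // so the run error is deliberately ignored in this sketch.
    out, _ := exec.Command("out/minikube-linux-arm64",
        "-p", "NoKubernetes-550884", "status", "-o", "json").Output()
    var st flatStatus
    if err := json.Unmarshal(out, &st); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}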

                                                
                                    
x
+
TestNoKubernetes/serial/Start (14.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550884 --no-kubernetes --driver=docker  --container-runtime=docker: (14.328866734s)
--- PASS: TestNoKubernetes/serial/Start (14.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-550884 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-550884 "sudo systemctl is-active --quiet service kubelet": exit status 1 (671.73409ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.67s)
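
"ssh: Process exited with status 3" is the expected outcome here: systemctl is-active exits 0 when a unit is active and non-zero (3 for inactive) otherwise, so this test passes precisely because the probe fails. A sketch of reading that exit code from Go via exec.ExitError, using the same minikube ssh invocation as the log.

package main

import (
    "errors"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Same probe as the test: a zero exit would mean kubelet is active,
    // which is a failure for a --no-kubernetes profile.
    cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-550884",
        "sudo systemctl is-active --quiet service kubelet")
    err := cmd.Run()
    var exitErr *exec.ExitError
    switch {
    case err == nil:
        log.Fatal("kubelet is active, but this profile should have no Kubernetes")
    case errors.As(err, &exitErr):
        // systemctl is-active exits 3 for an inactive unit.
        fmt.Println("kubelet not running, exit code:", exitErr.ExitCode())
    default:
        log.Fatal(err)
    }
}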

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-550884
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-550884: (1.344707551s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-550884 --driver=docker  --container-runtime=docker
E1206 19:46:48.982873  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-550884 --driver=docker  --container-runtime=docker: (8.528031391s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-550884 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-550884 "sudo systemctl is-active --quiet service kubelet": exit status 1 (455.854594ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (335.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-250339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1206 19:48:25.776387  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:49:29.237258  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-250339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m35.939921725s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (335.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (54.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-877831 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1206 19:51:26.192895  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-877831 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (54.925682229s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-877831 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3a642a35-e379-4833-8074-2852e163ddc2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3a642a35-e379-4833-8074-2852e163ddc2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.037528066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-877831 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.17s)
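
The DeployApp step applies testdata/busybox.yaml and then polls pods matching integration-test=busybox until they run. The harness watches pod phase directly; outside it, "kubectl wait" on the Ready condition is a close stand-in. A hedged sketch shelling out to kubectl, with the context name taken from the log and the 8m timeout mirroring the wait above.

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Block until pods labeled integration-test=busybox report Ready,
    // approximating the condition the test harness polls for above.
    cmd := exec.Command("kubectl", "--context", "no-preload-877831",
        "wait", "--for=condition=ready", "pod",
        "-l", "integration-test=busybox", "--timeout=8m0s")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatalf("busybox pod never became ready: %v", err)
    }
}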

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-877831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-877831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066996575s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-877831 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-877831 --alsologtostderr -v=3
E1206 19:51:48.982843  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-877831 --alsologtostderr -v=3: (11.19949809s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-877831 -n no-preload-877831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-877831 -n no-preload-877831: exit status 7 (90.702684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-877831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (350.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-877831 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1206 19:52:24.215546  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 19:53:25.775512  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-877831 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (5m50.053477907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-877831 -n no-preload-877831
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (350.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250339 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [370410b4-9119-4ff2-884b-82bf5e45a531] Pending
helpers_test.go:344: "busybox" [370410b4-9119-4ff2-884b-82bf5e45a531] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [370410b4-9119-4ff2-884b-82bf5e45a531] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.034675423s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250339 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-250339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-250339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.191159031s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-250339 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-250339 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-250339 --alsologtostderr -v=3: (10.973955753s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250339 -n old-k8s-version-250339
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250339 -n old-k8s-version-250339: exit status 7 (114.842543ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-250339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (425.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-250339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1206 19:54:48.821861  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 19:56:26.193541  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 19:56:32.033843  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:56:48.982690  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 19:57:24.215759  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-250339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m5.000834079s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250339 -n old-k8s-version-250339
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (425.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6f5k4" [b6791bea-6066-4272-9da4-098b3f592aeb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6f5k4" [b6791bea-6066-4272-9da4-098b3f592aeb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.028995522s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6f5k4" [b6791bea-6066-4272-9da4-098b3f592aeb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014079552s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-877831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-877831 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-877831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-877831 -n no-preload-877831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-877831 -n no-preload-877831: exit status 2 (380.684291ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-877831 -n no-preload-877831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-877831 -n no-preload-877831: exit status 2 (394.549883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-877831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-877831 -n no-preload-877831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-877831 -n no-preload-877831
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (49.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-056170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1206 19:58:25.776078  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-056170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (49.515162943s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-056170 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee7cb8d9-b921-448c-991e-2d5314463de2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee7cb8d9-b921-448c-991e-2d5314463de2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.050208469s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-056170 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-056170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-056170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.149133572s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-056170 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-056170 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-056170 --alsologtostderr -v=3: (10.954425502s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-056170 -n embed-certs-056170
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-056170 -n embed-certs-056170: exit status 7 (92.597559ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-056170 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (356.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-056170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-056170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m56.033558979s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-056170 -n embed-certs-056170
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (356.63s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7lx8j" [9c7f3149-7b4d-43d7-9fd0-d742a9843f82] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025407478s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7lx8j" [9c7f3149-7b4d-43d7-9fd0-d742a9843f82] Running
E1206 20:01:26.193590  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01116441s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-250339 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-250339 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-250339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250339 -n old-k8s-version-250339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250339 -n old-k8s-version-250339: exit status 2 (394.925241ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250339 -n old-k8s-version-250339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250339 -n old-k8s-version-250339: exit status 2 (390.308932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-250339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250339 -n old-k8s-version-250339
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250339 -n old-k8s-version-250339
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.47s)
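
The Pause step above treats exit status 2 from minikube status as expected while the cluster is paused ("status error: exit status 2 (may be ok)"). Here is a hypothetical Go sketch of that exit-code handling using only os/exec; the binary path, profile name, and the meaning of codes 2 (paused) and 7 (stopped, seen in EnableAddonAfterStop) are taken from this report, the rest is assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus mirrors `out/minikube-linux-arm64 status --format={{.Field}}`.
// minikube encodes cluster state in its exit code, so a non-zero exit is
// returned as data rather than treated as a hard failure.
func componentStatus(profile, field string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format", "{{."+field+"}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		return state, exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err
	}
	return state, 0, nil
}

func main() {
	for _, field := range []string{"APIServer", "Kubelet"} {
		state, code, err := componentStatus("old-k8s-version-250339", field)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s=%s (exit %d)\n", field, state, code) // e.g. APIServer=Paused (exit 2)
	}
}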

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-596991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1206 20:01:35.599290  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.604572  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.614801  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.635074  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.675326  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.755596  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:35.916103  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:36.236836  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:36.877530  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:38.157709  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:40.717897  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:45.839066  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:01:48.982418  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 20:01:56.079935  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:02:07.265107  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
E1206 20:02:16.560828  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:02:24.215294  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-596991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (50.404132728s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-596991 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bffd542-e454-4b11-81e0-637c916be146] Pending
helpers_test.go:344: "busybox" [2bffd542-e454-4b11-81e0-637c916be146] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2bffd542-e454-4b11-81e0-637c916be146] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.036167438s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-596991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)
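
The DeployApp step creates testdata/busybox.yaml, waits up to 8m0s for the integration-test=busybox pod, then reads the file-descriptor limit in-pod. A hypothetical sketch of the same sequence using plain kubectl; the manifest path, label, context name, and ulimit command are from the log, while the `kubectl wait` invocation is an assumed stand-in for the test's own polling.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "default-k8s-diff-port-596991"
	steps := [][]string{
		// Create the pod from the same manifest the test uses.
		{"--context", ctx, "create", "-f", "testdata/busybox.yaml"},
		// Wait for the labeled pod, mirroring the 8m0s wait in the log.
		{"--context", ctx, "wait", "--for=condition=ready", "pod",
			"-l", "integration-test=busybox", "--timeout=8m0s"},
		// Read the open-file limit inside the pod.
		{"--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, args := range steps {
		out, err := run(args...)
		fmt.Print(out)
		if err != nil {
			panic(err)
		}
	}
}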

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-596991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-596991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142273689s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-596991 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-596991 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-596991 --alsologtostderr -v=3: (10.960885587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991: exit status 7 (97.021293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-596991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-596991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1206 20:02:57.521451  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:03:25.775294  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
E1206 20:03:50.603661  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.608922  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.619093  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.639307  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.679566  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.759816  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:50.920355  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:51.240868  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:51.881810  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:53.162876  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:03:55.723748  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:04:00.844890  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:04:11.086040  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:04:19.442297  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:04:31.567012  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:05:12.527736  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-596991 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m27.794755199s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sqthq" [9009c23c-e8d9-48cb-843a-c12a9ed6ee2e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026563247s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sqthq" [9009c23c-e8d9-48cb-843a-c12a9ed6ee2e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011180875s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-056170 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-056170 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-056170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-056170 -n embed-certs-056170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-056170 -n embed-certs-056170: exit status 2 (397.47682ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-056170 -n embed-certs-056170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-056170 -n embed-certs-056170: exit status 2 (409.562449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-056170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-056170 -n embed-certs-056170
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-056170 -n embed-certs-056170
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-949575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1206 20:06:09.238195  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-949575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (48.924102778s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.92s)
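
All flags in the sketch below are copied verbatim from the FirstStart invocation above; wrapping the run in exec.CommandContext with a deadline is an assumption, showing how a caller might bound a start like this one.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the whole start at 15 minutes (the timeout value is an assumption).
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	args := []string{
		"start", "-p", "newest-cni-949575", "--memory=2200", "--alsologtostderr",
		"--wait=apiserver,system_pods,default_sa",
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=docker", "--container-runtime=docker",
		"--kubernetes-version=v1.29.0-rc.1",
	}
	out, err := exec.CommandContext(ctx, "out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}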

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-949575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 20:06:26.193251  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-949575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.339269344s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-949575 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-949575 --alsologtostderr -v=3: (5.784507357s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-949575 -n newest-cni-949575
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-949575 -n newest-cni-949575: exit status 7 (103.915065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-949575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-949575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1206 20:06:34.448002  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
E1206 20:06:35.599096  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
E1206 20:06:48.982676  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
E1206 20:07:03.282826  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-949575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (32.207049996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-949575 -n newest-cni-949575
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-949575 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-949575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-949575 -n newest-cni-949575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-949575 -n newest-cni-949575: exit status 2 (400.715634ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-949575 -n newest-cni-949575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-949575 -n newest-cni-949575: exit status 2 (387.339536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-949575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-949575 -n newest-cni-949575
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-949575 -n newest-cni-949575
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (92.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1206 20:07:24.215305  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/functional-796172/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m32.863283165s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcbnn" [2dd1a35e-3a09-45d8-8b1c-f0e579740d2c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcbnn" [2dd1a35e-3a09-45d8-8b1c-f0e579740d2c] Running
E1206 20:08:25.775771  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.027933196s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcbnn" [2dd1a35e-3a09-45d8-8b1c-f0e579740d2c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010961569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-596991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-596991 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-596991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991: exit status 2 (408.914789ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991: exit status 2 (409.905718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-596991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-596991 -n default-k8s-diff-port-596991
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)
E1206 20:16:26.192896  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/ingress-addon-legacy-998555/client.crt: no such file or directory
E1206 20:16:29.559352  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/auto-471467/client.crt: no such file or directory
E1206 20:16:31.698177  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/calico-471467/client.crt: no such file or directory
E1206 20:16:35.599063  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/no-preload-877831/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m6.341115165s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.34s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.69s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g4cqj" [77cfbb62-e1d1-45a0-b09a-9d6b3b9b5dab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 20:08:50.603624  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/old-k8s-version-250339/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-g4cqj" [77cfbb62-e1d1-45a0-b09a-9d6b3b9b5dab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.015207191s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.83s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
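
The DNS, Localhost, and HairPin checks above all exec into the netcat deployment. A hypothetical Go sketch bundling the three probes; the in-pod commands and context name are from the log, the probe helper is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a command inside the netcat deployment via kubectl.
func probe(kubecontext string, args ...string) error {
	base := []string{"--context", kubecontext, "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	kctx := "auto-471467"
	checks := []struct {
		name string
		args []string
	}{
		{"DNS", []string{"nslookup", "kubernetes.default"}},
		{"Localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
		// HairPin: the pod dials its own service name, exercising hairpin NAT.
		{"HairPin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
	}
	for _, c := range checks {
		if err := probe(kctx, c.args...); err != nil {
			fmt.Printf("%s probe failed: %v\n", c.name, err)
		} else {
			fmt.Printf("%s probe ok\n", c.name)
		}
	}
}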

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m24.338859033s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c7jbj" [8bd1beaa-a801-4e73-a371-a234cee2222b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.038180564s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b7gml" [23c1e194-d38c-45b7-8816-30d395096d3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b7gml" [23c1e194-d38c-45b7-8816-30d395096d3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.013719919s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m9.742998716s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.74s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fw9j4" [421ae659-2293-46e8-b225-72fd12e65626] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045375525s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rnvlp" [045dd177-2e0d-45cb-8304-fdddfa9ded38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rnvlp" [045dd177-2e0d-45cb-8304-fdddfa9ded38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012795616s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.65s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/false/Start (60.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m0.197008822s)
--- PASS: TestNetworkPlugins/group/false/Start (60.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-86mk2" [74289307-0d62-42e0-864b-7f24a289e7f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 20:11:48.982850  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/addons-440984/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-86mk2" [74289307-0d62-42e0-864b-7f24a289e7f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.013276835s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (57.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1206 20:12:28.034808  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/default-k8s-diff-port-596991/client.crt: no such file or directory
E1206 20:12:30.595721  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/default-k8s-diff-port-596991/client.crt: no such file or directory
E1206 20:12:35.716309  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/default-k8s-diff-port-596991/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (57.835463255s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.84s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xp66b" [a59046b9-d45d-4570-955b-c3c90fc8c879] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xp66b" [a59046b9-d45d-4570-955b-c3c90fc8c879] Running
E1206 20:12:45.957305  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/default-k8s-diff-port-596991/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.017012766s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.64s)

TestNetworkPlugins/group/false/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.33s)
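
The DNS probe resolves the API server's in-cluster name from the pod, which exercises cluster DNS over the pod network:

  # kubernetes.default expands via the pod's resolv.conf search path
  # to kubernetes.default.svc.cluster.local.
  kubectl --context false-471467 exec deployment/netcat -- \
    nslookup kubernetes.default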

TestNetworkPlugins/group/false/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.35s)
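
Localhost is the control for HairPin: dialing localhost stays inside the pod's own network namespace, so it can succeed even when hairpin NAT is broken:

  # Loopback check; no CNI data path involved.
  kubectl --context false-471467 exec deployment/netcat -- \
    /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"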

TestNetworkPlugins/group/false/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

TestNetworkPlugins/group/flannel/Start (65.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m5.904301419s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.90s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.72s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-471467 replace --force -f testdata/netcat-deployment.yaml
E1206 20:13:25.777470  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/skaffold-228311/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x2n7p" [3323d3a2-158d-4a2d-9f88-c846c27f2d00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x2n7p" [3323d3a2-158d-4a2d-9f88-c846c27f2d00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.013306926s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (91.13s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1206 20:14:06.196684  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/auto-471467/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m31.131065154s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.13s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ntlb6" [38e60ae0-dbf5-401f-9a9d-2dbdbcb357d7] Running
E1206 20:14:26.677462  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/auto-471467/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.03496233s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
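
ControllerPod waits for the flannel agent pod (the kube-flannel-ds-ntlb6 above) to report Running in the kube-flannel namespace. A one-line approximation of the helper, assuming kubectl's built-in readiness wait is an acceptable stand-in:

  # Wait for the flannel DaemonSet pod selected by app=flannel.
  kubectl --context flannel-471467 -n kube-flannel wait pod \
    -l app=flannel --for=condition=ready --timeout=10m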

TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/flannel/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2gmz5" [fe3b2ae2-e616-4873-aa0a-0e777b483c01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2gmz5" [fe3b2ae2-e616-4873-aa0a-0e777b483c01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.016400284s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.43s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

TestNetworkPlugins/group/kubenet/Start (88.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1206 20:15:09.321435  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/default-k8s-diff-port-596991/client.crt: no such file or directory
E1206 20:15:25.547353  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/kindnet-471467/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-471467 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m28.071226497s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (88.07s)
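
Unlike the flannel and bridge profiles above, which select a pod network with --cni=<name>, kubenet is a kubelet-native plugin chosen via --network-plugin. The logged invocation:

  # kubenet: the kubelet manages a simple bridge itself; no CNI add-on.
  out/minikube-linux-arm64 start -p kubenet-471467 \
    --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m \
    --network-plugin=kubenet \
    --driver=docker --container-runtime=docker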

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4qkwd" [ced98ecc-765c-49f5-b3df-a1384fa2c40d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4qkwd" [ced98ecc-765c-49f5-b3df-a1384fa2c40d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.01872052s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-471467 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-471467 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5m4xn" [4eb77dd1-2ff0-4785-80a2-a7144258f6c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5m4xn" [4eb77dd1-2ff0-4785-80a2-a7144258f6c6] Running
E1206 20:16:43.964090  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:43.969389  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:43.979625  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:43.999861  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:44.040242  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:44.120708  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:44.281209  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:44.601899  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:45.243024  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
E1206 20:16:46.523805  244814 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17740-239434/.minikube/profiles/custom-flannel-471467/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.010984892s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-471467 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-471467 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)

Test skip (27/330)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.84s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-553370 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-553370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-553370
--- SKIP: TestDownloadOnlyKic (0.84s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-054715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-054715
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/cilium (5.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-471467 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-471467

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-471467

>>> host: /etc/nsswitch.conf:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/hosts:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/resolv.conf:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-471467

>>> host: crictl pods:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: crictl containers:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> k8s: describe netcat deployment:
error: context "cilium-471467" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-471467" does not exist

>>> k8s: netcat logs:
error: context "cilium-471467" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-471467" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-471467" does not exist

>>> k8s: coredns logs:
error: context "cilium-471467" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-471467" does not exist

>>> k8s: api server logs:
error: context "cilium-471467" does not exist

>>> host: /etc/cni:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: ip a s:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: ip r s:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: iptables-save:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: iptables table nat:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-471467

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-471467

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-471467" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-471467" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-471467

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-471467

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-471467" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-471467" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-471467" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-471467" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-471467" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: kubelet daemon config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> k8s: kubelet logs:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-471467

>>> host: docker daemon status:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: docker daemon config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: docker system info:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: cri-docker daemon status:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: cri-docker daemon config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: cri-dockerd version:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: containerd daemon status:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: containerd daemon config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: containerd config dump:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: crio daemon status:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: crio daemon config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: /etc/crio:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

>>> host: crio config:
* Profile "cilium-471467" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471467"

----------------------- debugLogs end: cilium-471467 [took: 5.676910988s] --------------------------------
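
Every probe above failed for the same reason: the test was skipped before minikube ever started a cluster, so neither a kubeconfig context nor a profile named cilium-471467 exists. A sketch of reproducing the two failure shapes by hand:

  # kubectl fails because no such context was ever written:
  kubectl --context cilium-471467 get pods
  # minikube fails because the profile was never created:
  out/minikube-linux-arm64 profile list
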
helpers_test.go:175: Cleaning up "cilium-471467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-471467
--- SKIP: TestNetworkPlugins/group/cilium (5.89s)