Test Report: Docker_Linux_containerd_arm64 16865

e527c943862622d235c52d3f78f307a89288bf9f:2023-08-17:30622

Tests failed (10/310)

TestAddons/parallel/Ingress (37.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-028423 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-028423 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-028423 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8c6b79f6-dae4-49ae-adbf-3764690d8526] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2023/08/17 21:16:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:344: "nginx" [8c6b79f6-dae4-49ae-adbf-3764690d8526] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.013525537s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-028423 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
2023/08/17 21:16:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:15 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:16:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/08/17 21:16:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/08/17 21:16:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/08/17 21:16:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.090865049s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
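
The failure itself is narrow: the DNS query to the ingress-dns addon at 192.168.49.2 timed out, so the test never got an answer for hello-john.test. For local triage, the following is a minimal Go sketch of the lookup that the failing nslookup performs, pinning the resolver to the node IP; the hostname and address come from the log above, and this is illustrative code, not the test's own implementation.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Force every lookup to the ingress-dns server inside the
			// minikube node, ignoring the system resolver config.
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// The test hit the equivalent of this branch:
		// ";; connection timed out; no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}

A timeout here reproduces the exact stdout the test rejected above.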
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-028423 addons disable ingress --alsologtostderr -v=1: (7.812866118s)
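
The interleaved "[ERR] GET http://192.168.49.2:5000" and "[DEBUG] ... retrying in Ns (M left)" lines earlier in this output come from a parallel probe of the registry addon on the same node IP, and they show a doubling backoff with a fixed attempt budget (1s, 2s, 4s, 8s). A rough, self-contained Go sketch of that retry shape follows; the helper name and parameters are hypothetical, not minikube's code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry issues GETs until one succeeds or the attempt budget is
// spent, doubling the delay between tries. Callers must close resp.Body
// on success.
func getWithRetry(url string, attempts int, delay time.Duration) (*http.Response, error) {
	for {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		attempts--
		if attempts <= 0 {
			return nil, err
		}
		// Matches the log shape: "retrying in 1s (4 left)", "in 2s (3 left)", ...
		fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, delay, attempts)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	if _, err := getWithRetry("http://192.168.49.2:5000", 5, time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}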
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-028423
helpers_test.go:235: (dbg) docker inspect addons-028423:

-- stdout --
	[
	    {
	        "Id": "a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f",
	        "Created": "2023-08-17T21:12:34.988429295Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8797,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:12:35.356651379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f/hosts",
	        "LogPath": "/var/lib/docker/containers/a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f/a8030c7881f3daa2b9e4f8df244d8d2d8b717546314b414701486b2e4121a18f-json.log",
	        "Name": "/addons-028423",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-028423:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-028423",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2b1ee9e5a2e445eb7cb792b293597c11bbcd6ae14b025824ade6d90a614810f-init/diff:/var/lib/docker/overlay2/6e6597fd944d5f98ecbe7d9c5301a949ba6526f8982591cdfcbe3d11f113be4a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2b1ee9e5a2e445eb7cb792b293597c11bbcd6ae14b025824ade6d90a614810f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2b1ee9e5a2e445eb7cb792b293597c11bbcd6ae14b025824ade6d90a614810f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2b1ee9e5a2e445eb7cb792b293597c11bbcd6ae14b025824ade6d90a614810f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-028423",
	                "Source": "/var/lib/docker/volumes/addons-028423/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-028423",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-028423",
	                "name.minikube.sigs.k8s.io": "addons-028423",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "011f3f41c04aa2de86a8d93f6530e6dd891b49af5f1b7df8e9fed55c09da5820",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/011f3f41c04a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-028423": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a8030c7881f3",
	                        "addons-028423"
	                    ],
	                    "NetworkID": "d8d506c187019bcd3345e6778d855a17bbcecfb45ca2e9849a50bbba571abb2d",
	                    "EndpointID": "1b2df31eff3240d05139e79a12ecb014aa97e1dcedca06d3241aa23815aa8c86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
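
Everything this failure needs from that dump lives in two places: NetworkSettings.Networks (the node IP 192.168.49.2 that both failing probes targeted) and NetworkSettings.Ports (where the container's 5000/tcp is published on the host loopback). A small stdlib-only Go sketch of pulling those fields out of `docker inspect` output; this is illustrative, the post-mortem helper itself just prints the raw JSON.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only the fields of `docker inspect` output used
// here; the real document is far larger, as shown above.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-028423").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		for name, nw := range e.NetworkSettings.Networks {
			fmt.Printf("network %s: container IP %s\n", name, nw.IPAddress)
		}
		// The registry probe hit 192.168.49.2:5000 directly; this prints
		// where 5000/tcp is also reachable via the host loopback.
		for _, b := range e.NetworkSettings.Ports["5000/tcp"] {
			fmt.Printf("5000/tcp published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}
}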
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-028423 -n addons-028423
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-028423 logs -n 25: (1.556242442s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-481885   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-481885           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-481885   | jenkins | v1.31.2 | 17 Aug 23 21:11 UTC |                     |
	|         | -p download-only-481885           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-481885   | jenkins | v1.31.2 | 17 Aug 23 21:11 UTC |                     |
	|         | -p download-only-481885           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	| delete  | -p download-only-481885           | download-only-481885   | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	| delete  | -p download-only-481885           | download-only-481885   | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	| start   | --download-only -p                | download-docker-403495 | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC |                     |
	|         | download-docker-403495            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	| delete  | -p download-docker-403495         | download-docker-403495 | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	| start   | --download-only -p                | binary-mirror-579967   | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC |                     |
	|         | binary-mirror-579967              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39617            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-579967           | binary-mirror-579967   | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	| start   | -p addons-028423                  | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:14 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd    |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:14 UTC | 17 Aug 23 21:14 UTC |
	|         | addons-028423                     |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:14 UTC | 17 Aug 23 21:14 UTC |
	|         | -p addons-028423                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-028423 ip                  | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:14 UTC | 17 Aug 23 21:14 UTC |
	| addons  | addons-028423 addons              | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-028423 addons              | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-028423 addons              | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:16 UTC |
	|         | addons-028423                     |                        |         |         |                     |                     |
	| ssh     | addons-028423 ssh curl -s         | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:16 UTC | 17 Aug 23 21:16 UTC |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| ip      | addons-028423 ip                  | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:16 UTC | 17 Aug 23 21:16 UTC |
	| addons  | addons-028423 addons disable      | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:16 UTC | 17 Aug 23 21:16 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-028423 addons disable      | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:16 UTC | 17 Aug 23 21:16 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	| addons  | addons-028423 addons disable      | addons-028423          | jenkins | v1.31.2 | 17 Aug 23 21:16 UTC | 17 Aug 23 21:16 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:12:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:12:13.217624    8320 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:12:13.217748    8320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:12:13.217758    8320 out.go:309] Setting ErrFile to fd 2...
	I0817 21:12:13.217763    8320 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:12:13.217999    8320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:12:13.218394    8320 out.go:303] Setting JSON to false
	I0817 21:12:13.219112    8320 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3272,"bootTime":1692303461,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:12:13.219174    8320 start.go:138] virtualization:  
	I0817 21:12:13.222093    8320 out.go:177] * [addons-028423] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:12:13.224503    8320 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:12:13.226472    8320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:12:13.224646    8320 notify.go:220] Checking for updates...
	I0817 21:12:13.230754    8320 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:12:13.232950    8320 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:12:13.235032    8320 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:12:13.236883    8320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:12:13.238899    8320 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:12:13.264089    8320 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:12:13.264195    8320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:12:13.354028    8320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:12:13.344173607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:12:13.354140    8320 docker.go:294] overlay module found
	I0817 21:12:13.357590    8320 out.go:177] * Using the docker driver based on user configuration
	I0817 21:12:13.359663    8320 start.go:298] selected driver: docker
	I0817 21:12:13.359677    8320 start.go:902] validating driver "docker" against <nil>
	I0817 21:12:13.359699    8320 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:12:13.360313    8320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:12:13.443725    8320 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:12:13.434329939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:12:13.443882    8320 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:12:13.444173    8320 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:12:13.446279    8320 out.go:177] * Using Docker driver with root privileges
	I0817 21:12:13.448299    8320 cni.go:84] Creating CNI manager for ""
	I0817 21:12:13.448323    8320 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:12:13.448340    8320 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:12:13.448356    8320 start_flags.go:319] config:
	{Name:addons-028423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-028423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:12:13.450349    8320 out.go:177] * Starting control plane node addons-028423 in cluster addons-028423
	I0817 21:12:13.452173    8320 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:12:13.454036    8320 out.go:177] * Pulling base image ...
	I0817 21:12:13.455991    8320 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:12:13.456041    8320 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4
	I0817 21:12:13.456061    8320 cache.go:57] Caching tarball of preloaded images
	I0817 21:12:13.456074    8320 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:12:13.456131    8320 preload.go:174] Found /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 21:12:13.456141    8320 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on containerd
	I0817 21:12:13.456471    8320 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/config.json ...
	I0817 21:12:13.456497    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/config.json: {Name:mk8fafc61294bcc4c32b33e27841c82b6db70c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:13.474302    8320 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:12:13.474416    8320 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:12:13.474440    8320 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:12:13.474447    8320 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:12:13.474455    8320 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:12:13.474463    8320 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0817 21:12:28.994283    8320 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0817 21:12:28.994342    8320 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:12:28.994370    8320 start.go:365] acquiring machines lock for addons-028423: {Name:mk2d66a68f76b2747c8f6f57a9903a933164bcef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:12:28.994491    8320 start.go:369] acquired machines lock for "addons-028423" in 98.649µs
	I0817 21:12:28.994528    8320 start.go:93] Provisioning new machine with config: &{Name:addons-028423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-028423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:12:28.994607    8320 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:12:28.996861    8320 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 21:12:28.997093    8320 start.go:159] libmachine.API.Create for "addons-028423" (driver="docker")
	I0817 21:12:28.997117    8320 client.go:168] LocalClient.Create starting
	I0817 21:12:28.997235    8320 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem
	I0817 21:12:29.161268    8320 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem
	I0817 21:12:29.612312    8320 cli_runner.go:164] Run: docker network inspect addons-028423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:12:29.632180    8320 cli_runner.go:211] docker network inspect addons-028423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:12:29.632268    8320 network_create.go:281] running [docker network inspect addons-028423] to gather additional debugging logs...
	I0817 21:12:29.632287    8320 cli_runner.go:164] Run: docker network inspect addons-028423
	W0817 21:12:29.649111    8320 cli_runner.go:211] docker network inspect addons-028423 returned with exit code 1
	I0817 21:12:29.649137    8320 network_create.go:284] error running [docker network inspect addons-028423]: docker network inspect addons-028423: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-028423 not found
	I0817 21:12:29.649148    8320 network_create.go:286] output of [docker network inspect addons-028423]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-028423 not found
	
	** /stderr **
	I0817 21:12:29.649210    8320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:12:29.666963    8320 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001169d70}
	I0817 21:12:29.667002    8320 network_create.go:123] attempt to create docker network addons-028423 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 21:12:29.667059    8320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-028423 addons-028423
	I0817 21:12:29.735805    8320 network_create.go:107] docker network addons-028423 192.168.49.0/24 created
	I0817 21:12:29.735836    8320 kic.go:117] calculated static IP "192.168.49.2" for the "addons-028423" container
	I0817 21:12:29.735915    8320 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:12:29.752423    8320 cli_runner.go:164] Run: docker volume create addons-028423 --label name.minikube.sigs.k8s.io=addons-028423 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:12:29.771524    8320 oci.go:103] Successfully created a docker volume addons-028423
	I0817 21:12:29.771617    8320 cli_runner.go:164] Run: docker run --rm --name addons-028423-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028423 --entrypoint /usr/bin/test -v addons-028423:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:12:30.789945    8320 cli_runner.go:217] Completed: docker run --rm --name addons-028423-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028423 --entrypoint /usr/bin/test -v addons-028423:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.018290151s)
	I0817 21:12:30.789971    8320 oci.go:107] Successfully prepared a docker volume addons-028423
	I0817 21:12:30.789989    8320 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:12:30.790007    8320 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:12:30.790104    8320 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-028423:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:12:34.898716    8320 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-028423:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.108563473s)
	I0817 21:12:34.898748    8320 kic.go:199] duration metric: took 4.108736 seconds to extract preloaded images to volume
	W0817 21:12:34.898882    8320 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:12:34.899001    8320 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:12:34.972194    8320 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-028423 --name addons-028423 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028423 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-028423 --network addons-028423 --ip 192.168.49.2 --volume addons-028423:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:12:35.369198    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Running}}
	I0817 21:12:35.399845    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:12:35.429632    8320 cli_runner.go:164] Run: docker exec addons-028423 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:12:35.504793    8320 oci.go:144] the created container "addons-028423" has a running status.
	I0817 21:12:35.504821    8320 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa...
	I0817 21:12:35.748218    8320 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:12:35.780115    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:12:35.810782    8320 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:12:35.810800    8320 kic_runner.go:114] Args: [docker exec --privileged addons-028423 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:12:35.908198    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:12:35.930821    8320 machine.go:88] provisioning docker machine ...
	I0817 21:12:35.930848    8320 ubuntu.go:169] provisioning hostname "addons-028423"
	I0817 21:12:35.930911    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:35.958277    8320 main.go:141] libmachine: Using SSH client type: native
	I0817 21:12:35.958774    8320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0817 21:12:35.958794    8320 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-028423 && echo "addons-028423" | sudo tee /etc/hostname
	I0817 21:12:35.959385    8320 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41270->127.0.0.1:32772: read: connection reset by peer
	I0817 21:12:39.101661    8320 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-028423
	
	I0817 21:12:39.101750    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:39.121193    8320 main.go:141] libmachine: Using SSH client type: native
	I0817 21:12:39.121618    8320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0817 21:12:39.121642    8320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-028423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-028423/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-028423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:12:39.251802    8320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:12:39.251891    8320 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:12:39.251930    8320 ubuntu.go:177] setting up certificates
	I0817 21:12:39.251960    8320 provision.go:83] configureAuth start
	I0817 21:12:39.252057    8320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028423
	I0817 21:12:39.270827    8320 provision.go:138] copyHostCerts
	I0817 21:12:39.270908    8320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:12:39.271030    8320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:12:39.271082    8320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:12:39.271125    8320 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.addons-028423 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-028423]
	I0817 21:12:40.065587    8320 provision.go:172] copyRemoteCerts
	I0817 21:12:40.065655    8320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:12:40.065698    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:40.087782    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:12:40.185864    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:12:40.215341    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0817 21:12:40.243723    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:12:40.271952    8320 provision.go:86] duration metric: configureAuth took 1.019960752s
	I0817 21:12:40.272019    8320 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:12:40.272221    8320 config.go:182] Loaded profile config "addons-028423": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:12:40.272232    8320 machine.go:91] provisioned docker machine in 4.341394844s
	I0817 21:12:40.272239    8320 client.go:171] LocalClient.Create took 11.27511768s
	I0817 21:12:40.272257    8320 start.go:167] duration metric: libmachine.API.Create for "addons-028423" took 11.275164645s
	I0817 21:12:40.272268    8320 start.go:300] post-start starting for "addons-028423" (driver="docker")
	I0817 21:12:40.272276    8320 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:12:40.272336    8320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:12:40.272381    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:40.290961    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:12:40.386125    8320 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:12:40.390173    8320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:12:40.390210    8320 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:12:40.390222    8320 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:12:40.390228    8320 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:12:40.390237    8320 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:12:40.390307    8320 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:12:40.390334    8320 start.go:303] post-start completed in 118.060553ms
	I0817 21:12:40.390671    8320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028423
	I0817 21:12:40.409571    8320 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/config.json ...
	I0817 21:12:40.409842    8320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:12:40.409893    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:40.434063    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:12:40.524531    8320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:12:40.529877    8320 start.go:128] duration metric: createHost completed in 11.535256889s
	I0817 21:12:40.529900    8320 start.go:83] releasing machines lock for "addons-028423", held for 11.535395849s
	I0817 21:12:40.529964    8320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028423
	I0817 21:12:40.547630    8320 ssh_runner.go:195] Run: cat /version.json
	I0817 21:12:40.547661    8320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:12:40.547687    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:40.547728    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:12:40.570487    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:12:40.589478    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:12:40.666898    8320 ssh_runner.go:195] Run: systemctl --version
	I0817 21:12:40.800116    8320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:12:40.805757    8320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:12:40.835048    8320 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
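The find/sed pass above guarantees that every loopback CNI config names itself and declares cniVersion 1.0.0, which CNI 1.x plugins require. The patched file comes out roughly like this (a sketch of the typical result, not the node's literal file):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}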
	I0817 21:12:40.835126    8320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:12:40.867248    8320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0817 21:12:40.867272    8320 start.go:466] detecting cgroup driver to use...
	I0817 21:12:40.867333    8320 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:12:40.867412    8320 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:12:40.881942    8320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:12:40.895609    8320 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:12:40.895675    8320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:12:40.911662    8320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:12:40.927974    8320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:12:41.017467    8320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:12:41.125184    8320 docker.go:212] disabling docker service ...
	I0817 21:12:41.125276    8320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:12:41.147242    8320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:12:41.161200    8320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:12:41.269685    8320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:12:41.370828    8320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:12:41.384741    8320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:12:41.405625    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0817 21:12:41.418409    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:12:41.431258    8320 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:12:41.431400    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:12:41.443602    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:12:41.456526    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:12:41.469179    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:12:41.483764    8320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:12:41.495804    8320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0817 21:12:41.509867    8320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:12:41.522416    8320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:12:41.532542    8320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:12:41.631269    8320 ssh_runner.go:195] Run: sudo systemctl restart containerd
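The run of sed edits above rewrites /etc/containerd/config.toml in place, pinning the pause image, disabling restrict_oom_score_adj, forcing the cgroupfs driver (SystemdCgroup = false), migrating any runtime v1/runc v1 references to io.containerd.runc.v2, and pointing conf_dir at /etc/cni/net.d, before the daemon-reload/restart makes it take effect. Collapsed into one invocation, as a sketch:

	# assumption: single-shot equivalent of the per-line sed calls above
	sudo sed -i -r \
	    -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
	    -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
	    -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	    -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	    -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
	    -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
	    /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd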
	I0817 21:12:41.713432    8320 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:12:41.713577    8320 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:12:41.718705    8320 start.go:534] Will wait 60s for crictl version
	I0817 21:12:41.718862    8320 ssh_runner.go:195] Run: which crictl
	I0817 21:12:41.723932    8320 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:12:41.779006    8320 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0817 21:12:41.779166    8320 ssh_runner.go:195] Run: containerd --version
	I0817 21:12:41.807245    8320 ssh_runner.go:195] Run: containerd --version
	I0817 21:12:41.843187    8320 out.go:177] * Preparing Kubernetes v1.27.4 on containerd 1.6.21 ...
	I0817 21:12:41.845394    8320 cli_runner.go:164] Run: docker network inspect addons-028423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:12:41.863316    8320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:12:41.867864    8320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
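Note the shape of the /etc/hosts update above: filter out the stale entry with grep, append the fresh one, write to a temp file, then sudo cp back. Inside the kic container /etc/hosts is bind-mounted, so a rename-based edit (sed -i) would likely fail; cp rewrites the file in place. The same idempotent pattern as a reusable sketch (pin_host is a hypothetical helper, not minikube code):

	pin_host() {  # pin_host <ip> <name>: ensure exactly one "<ip>\t<name>" line
	    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	pin_host 192.168.49.1 host.minikube.internal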
	I0817 21:12:41.881293    8320 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:12:41.881364    8320 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:12:41.923232    8320 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:12:41.923253    8320 containerd.go:518] Images already preloaded, skipping extraction
	I0817 21:12:41.923304    8320 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:12:41.964091    8320 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:12:41.964111    8320 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:12:41.964217    8320 ssh_runner.go:195] Run: sudo crictl info
	I0817 21:12:42.005055    8320 cni.go:84] Creating CNI manager for ""
	I0817 21:12:42.005078    8320 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:12:42.005090    8320 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:12:42.005108    8320 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-028423 NodeName:addons-028423 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:12:42.005277    8320 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-028423"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:12:42.005351    8320 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-028423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-028423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:12:42.005426    8320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:12:42.017643    8320 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:12:42.017734    8320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:12:42.029350    8320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0817 21:12:42.053691    8320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:12:42.078322    8320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0817 21:12:42.103017    8320 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:12:42.108080    8320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:12:42.122921    8320 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423 for IP: 192.168.49.2
	I0817 21:12:42.122952    8320 certs.go:190] acquiring lock for shared ca certs: {Name:mk058988a603cd06c6d056488c4bdaf60bd886a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:42.123200    8320 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key
	I0817 21:12:42.348234    8320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt ...
	I0817 21:12:42.348267    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt: {Name:mk30e94ca4a938caac34c243204462e44d5cd8cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:42.348461    8320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key ...
	I0817 21:12:42.348473    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key: {Name:mk0d02f2ed9e8fb6d2335c795d58565ced9f8582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:42.348565    8320 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key
	I0817 21:12:43.358076    8320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt ...
	I0817 21:12:43.358107    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt: {Name:mk7e0f2181ca7ffa50a0b1279f1e4ac24120afc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.358297    8320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key ...
	I0817 21:12:43.358310    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key: {Name:mk0f6fac1657698f9df957c7c298f2eeec025ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.358428    8320 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.key
	I0817 21:12:43.358467    8320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt with IP's: []
	I0817 21:12:43.593267    8320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt ...
	I0817 21:12:43.593295    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: {Name:mk81b264c24bded38cb6eeb022215bd15ceb4a97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.593472    8320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.key ...
	I0817 21:12:43.593484    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.key: {Name:mkc0a4786e83cd18e2fa399d89177e44af2a2c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.593565    8320 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key.dd3b5fb2
	I0817 21:12:43.593586    8320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:12:43.961679    8320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt.dd3b5fb2 ...
	I0817 21:12:43.961710    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt.dd3b5fb2: {Name:mk99554d4b3ff8f00a5d3b9b22eb59d46614ce8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.961889    8320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key.dd3b5fb2 ...
	I0817 21:12:43.961903    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key.dd3b5fb2: {Name:mk95cb97a7c9c212ed2421fdf750eae14bd8f3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:43.961975    8320 certs.go:337] copying /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt
	I0817 21:12:43.962048    8320 certs.go:341] copying /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key
	I0817 21:12:43.962097    8320 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.key
	I0817 21:12:43.962116    8320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.crt with IP's: []
	I0817 21:12:44.437190    8320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.crt ...
	I0817 21:12:44.437225    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.crt: {Name:mkbbe7c0e519e286550fc214b303515e31925e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:44.437414    8320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.key ...
	I0817 21:12:44.437425    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.key: {Name:mk79bc3df8bdc0793956aecdbc4449929a4d23c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:12:44.437611    8320 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem (1675 bytes)
	I0817 21:12:44.437654    8320 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:12:44.437686    8320 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:12:44.437715    8320 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem (1675 bytes)
	I0817 21:12:44.438288    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:12:44.467908    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:12:44.496025    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:12:44.524308    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:12:44.552203    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:12:44.580726    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:12:44.608648    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:12:44.638010    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:12:44.669373    8320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:12:44.697945    8320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:12:44.718097    8320 ssh_runner.go:195] Run: openssl version
	I0817 21:12:44.725056    8320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:12:44.736212    8320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:12:44.740584    8320 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:12:44.740644    8320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:12:44.749054    8320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
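The two steps above install the minikube CA into the system trust store using OpenSSL's subject-hash convention: the PEM is placed under /usr/share/ca-certificates and then symlinked as /etc/ssl/certs/<hash>.0, where <hash> is what openssl x509 -hash prints (b5213941 for this CA, per the link name logged above). Rebuilding that link by hand, as a sketch:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0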
	I0817 21:12:44.761084    8320 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:12:44.765715    8320 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:12:44.765785    8320 kubeadm.go:404] StartCluster: {Name:addons-028423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-028423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:12:44.765910    8320 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 21:12:44.765971    8320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:12:44.813452    8320 cri.go:89] found id: ""
	I0817 21:12:44.813532    8320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:12:44.823840    8320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:12:44.834045    8320 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0817 21:12:44.834105    8320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:12:44.844751    8320 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:12:44.844830    8320 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 21:12:44.901097    8320 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 21:12:44.901401    8320 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:12:44.945603    8320 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:12:44.945677    8320 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-aws
	I0817 21:12:44.945720    8320 kubeadm.go:322] OS: Linux
	I0817 21:12:44.945773    8320 kubeadm.go:322] CGROUPS_CPU: enabled
	I0817 21:12:44.945824    8320 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0817 21:12:44.945882    8320 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0817 21:12:44.945937    8320 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0817 21:12:44.945985    8320 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0817 21:12:44.946044    8320 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0817 21:12:44.946096    8320 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0817 21:12:44.946152    8320 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0817 21:12:44.946200    8320 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0817 21:12:45.027102    8320 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:12:45.027269    8320 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:12:45.027389    8320 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:12:45.326893    8320 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:12:45.330384    8320 out.go:204]   - Generating certificates and keys ...
	I0817 21:12:45.330608    8320 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:12:45.330740    8320 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:12:45.837901    8320 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:12:46.154652    8320 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:12:47.061691    8320 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:12:48.167086    8320 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:12:48.336292    8320 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:12:48.336696    8320 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-028423 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:12:48.824826    8320 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:12:48.825159    8320 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-028423 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:12:49.920613    8320 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:12:50.256121    8320 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:12:50.770988    8320 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:12:50.771293    8320 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:12:51.498722    8320 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:12:52.087213    8320 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:12:52.968533    8320 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:12:53.364938    8320 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:12:53.379532    8320 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:12:53.380141    8320 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:12:53.380409    8320 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:12:53.496359    8320 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:12:53.498714    8320 out.go:204]   - Booting up control plane ...
	I0817 21:12:53.498819    8320 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:12:53.504415    8320 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:12:53.505665    8320 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:12:53.506615    8320 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:12:53.509423    8320 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:13:01.012532    8320 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503087 seconds
	I0817 21:13:01.012645    8320 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:13:01.031847    8320 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:13:01.561504    8320 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:13:01.561969    8320 kubeadm.go:322] [mark-control-plane] Marking the node addons-028423 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:13:02.073846    8320 kubeadm.go:322] [bootstrap-token] Using token: na59d3.viwcmybt5qnb6elr
	I0817 21:13:02.075915    8320 out.go:204]   - Configuring RBAC rules ...
	I0817 21:13:02.076035    8320 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:13:02.081074    8320 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:13:02.089781    8320 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:13:02.098796    8320 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:13:02.103068    8320 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:13:02.111824    8320 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:13:02.126280    8320 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:13:02.357707    8320 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:13:02.490152    8320 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:13:02.492181    8320 kubeadm.go:322] 
	I0817 21:13:02.492274    8320 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:13:02.492288    8320 kubeadm.go:322] 
	I0817 21:13:02.492377    8320 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:13:02.492384    8320 kubeadm.go:322] 
	I0817 21:13:02.492408    8320 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:13:02.492468    8320 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:13:02.492537    8320 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:13:02.492542    8320 kubeadm.go:322] 
	I0817 21:13:02.492596    8320 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 21:13:02.492606    8320 kubeadm.go:322] 
	I0817 21:13:02.492659    8320 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:13:02.492663    8320 kubeadm.go:322] 
	I0817 21:13:02.492723    8320 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:13:02.492805    8320 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:13:02.492869    8320 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:13:02.492876    8320 kubeadm.go:322] 
	I0817 21:13:02.492972    8320 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:13:02.493056    8320 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:13:02.493061    8320 kubeadm.go:322] 
	I0817 21:13:02.493156    8320 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token na59d3.viwcmybt5qnb6elr \
	I0817 21:13:02.493271    8320 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2eedc1a02cbc836dd125235c267520d762e5fc79fb87b3b821c98b561adbc76b \
	I0817 21:13:02.493294    8320 kubeadm.go:322] 	--control-plane 
	I0817 21:13:02.493299    8320 kubeadm.go:322] 
	I0817 21:13:02.493390    8320 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:13:02.493399    8320 kubeadm.go:322] 
	I0817 21:13:02.493484    8320 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token na59d3.viwcmybt5qnb6elr \
	I0817 21:13:02.493604    8320 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2eedc1a02cbc836dd125235c267520d762e5fc79fb87b3b821c98b561adbc76b 
	I0817 21:13:02.501806    8320 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0817 21:13:02.501993    8320 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:13:02.502037    8320 cni.go:84] Creating CNI manager for ""
	I0817 21:13:02.502057    8320 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:13:02.504864    8320 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:13:02.506767    8320 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:13:02.512483    8320 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:13:02.512517    8320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:13:02.538406    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:13:03.509967    8320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:13:03.510097    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:03.510223    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=addons-028423 minikube.k8s.io/updated_at=2023_08_17T21_13_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:03.707814    8320 ops.go:34] apiserver oom_adj: -16
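The oom_adj value above is read straight from procfs for the apiserver process; -16 means the kubelet launched it with a strongly negative OOM adjustment, so the kernel's OOM killer prefers almost any other process. The check, stand-alone:

	cat "/proc/$(pgrep kube-apiserver)/oom_adj"   # expect a negative value, -16 here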
	I0817 21:13:03.707902    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:03.801945    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:04.399746    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:04.899827    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:05.399226    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:05.899699    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:06.400060    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:06.899171    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:07.399863    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:07.899720    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:08.400170    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:08.899685    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:09.399478    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:09.900148    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:10.400001    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:10.899680    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:11.399409    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:11.900046    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:12.400065    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:12.899968    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:13.399294    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:13.899928    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:14.399199    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:14.899198    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:15.399830    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:15.899919    8320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:13:16.021982    8320 kubeadm.go:1081] duration metric: took 12.511920075s to wait for elevateKubeSystemPrivileges.
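The burst of identical "kubectl get sa default" runs above is a readiness poll: the default ServiceAccount only exists once kube-system privileges have been elevated, so minikube retries on a roughly 500ms cadence (visible in the timestamps) until the get succeeds, 12.5s in this run. A hedged shell rendering of that loop:

	# assumption: shell equivalent of minikube's Go-side retry loop
	until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done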
	I0817 21:13:16.022005    8320 kubeadm.go:406] StartCluster complete in 31.256241644s
	I0817 21:13:16.022020    8320 settings.go:142] acquiring lock: {Name:mk7a5a07825601654f691495799b769adb4489ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:13:16.022139    8320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:13:16.022509    8320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/kubeconfig: {Name:mkf341824bbe915f226637e75b19e0928287e2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:13:16.022920    8320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:13:16.023184    8320 config.go:182] Loaded profile config "addons-028423": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:13:16.023215    8320 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0817 21:13:16.023286    8320 addons.go:69] Setting volumesnapshots=true in profile "addons-028423"
	I0817 21:13:16.023299    8320 addons.go:231] Setting addon volumesnapshots=true in "addons-028423"
	I0817 21:13:16.023333    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.023833    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.024125    8320 addons.go:69] Setting ingress=true in profile "addons-028423"
	I0817 21:13:16.024142    8320 addons.go:231] Setting addon ingress=true in "addons-028423"
	I0817 21:13:16.024189    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.024567    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.024633    8320 addons.go:69] Setting cloud-spanner=true in profile "addons-028423"
	I0817 21:13:16.024643    8320 addons.go:231] Setting addon cloud-spanner=true in "addons-028423"
	I0817 21:13:16.024666    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.024991    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.025049    8320 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-028423"
	I0817 21:13:16.025073    8320 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-028423"
	I0817 21:13:16.025098    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.025442    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.025496    8320 addons.go:69] Setting default-storageclass=true in profile "addons-028423"
	I0817 21:13:16.025506    8320 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-028423"
	I0817 21:13:16.025735    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.025795    8320 addons.go:69] Setting gcp-auth=true in profile "addons-028423"
	I0817 21:13:16.025807    8320 mustload.go:65] Loading cluster: addons-028423
	I0817 21:13:16.025950    8320 config.go:182] Loaded profile config "addons-028423": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:13:16.026138    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.026199    8320 addons.go:69] Setting metrics-server=true in profile "addons-028423"
	I0817 21:13:16.026209    8320 addons.go:231] Setting addon metrics-server=true in "addons-028423"
	I0817 21:13:16.026233    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.026568    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.026867    8320 addons.go:69] Setting registry=true in profile "addons-028423"
	I0817 21:13:16.026885    8320 addons.go:231] Setting addon registry=true in "addons-028423"
	I0817 21:13:16.026913    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.027284    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.027346    8320 addons.go:69] Setting storage-provisioner=true in profile "addons-028423"
	I0817 21:13:16.027354    8320 addons.go:231] Setting addon storage-provisioner=true in "addons-028423"
	I0817 21:13:16.027377    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.027747    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.027820    8320 addons.go:69] Setting ingress-dns=true in profile "addons-028423"
	I0817 21:13:16.027829    8320 addons.go:231] Setting addon ingress-dns=true in "addons-028423"
	I0817 21:13:16.027858    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.028190    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.038752    8320 addons.go:69] Setting inspektor-gadget=true in profile "addons-028423"
	I0817 21:13:16.038783    8320 addons.go:231] Setting addon inspektor-gadget=true in "addons-028423"
	I0817 21:13:16.038828    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.039375    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.095346    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0817 21:13:16.102070    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 21:13:16.102141    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 21:13:16.102244    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.133274    8320 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0817 21:13:16.136075    8320 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 21:13:16.136155    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 21:13:16.136261    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.186563    8320 out.go:177]   - Using image docker.io/registry:2.8.1
	I0817 21:13:16.190731    8320 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0817 21:13:16.196710    8320 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 21:13:16.196809    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0817 21:13:16.197041    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.215263    8320 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0817 21:13:16.222677    8320 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0817 21:13:16.222740    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0817 21:13:16.222835    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.237397    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0817 21:13:16.240477    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0817 21:13:16.243096    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0817 21:13:16.244769    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0817 21:13:16.247057    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0817 21:13:16.249397    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0817 21:13:16.259009    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0817 21:13:16.261192    8320 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0817 21:13:16.275141    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 21:13:16.275171    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 21:13:16.275231    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.275080    8320 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-028423" context rescaled to 1 replicas
	I0817 21:13:16.275379    8320 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:13:16.297115    8320 out.go:177] * Verifying Kubernetes components...
	I0817 21:13:16.304862    8320 addons.go:231] Setting addon default-storageclass=true in "addons-028423"
	I0817 21:13:16.305077    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.312245    8320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:13:16.312547    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:16.313042    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:16.365219    8320 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:13:16.375313    8320 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:13:16.375386    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:13:16.375470    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.379887    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
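The sshutil client above is the tunnel every later "scp memory" copy rides on. A minimal manual equivalent, assuming the key path, port, and user exactly as logged (host-key checking relaxed, since the node is disposable):

    # Hedged sketch of the session sshutil.go opens above.
    ssh -o StrictHostKeyChecking=no -p 32772 \
        -i /home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa \
        docker@127.0.0.1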
	I0817 21:13:16.394741    8320 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0817 21:13:16.398802    8320 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:13:16.398832    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0817 21:13:16.398900    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.432088    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.432740    8320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:13:16.441796    8320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:13:16.460906    8320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0817 21:13:16.461807    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.468483    8320 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:13:16.468506    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0817 21:13:16.468566    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.490764    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.495409    8320 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:13:16.495437    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:13:16.495504    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.524074    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.534074    8320 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0817 21:13:16.537501    8320 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0817 21:13:16.537525    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0817 21:13:16.537588    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:16.558815    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.562992    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.604033    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.608066    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.612646    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:16.657636    8320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 21:13:16.658346    8320 node_ready.go:35] waiting up to 6m0s for node "addons-028423" to be "Ready" ...
	I0817 21:13:16.680730    8320 node_ready.go:49] node "addons-028423" has status "Ready":"True"
	I0817 21:13:16.680801    8320 node_ready.go:38] duration metric: took 22.434206ms waiting for node "addons-028423" to be "Ready" ...
	I0817 21:13:16.680824    8320 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:13:16.692593    8320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace to be "Ready" ...
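pod_ready.go polls the API server for each system-critical pod until its Ready condition is True. A hedged kubectl rendering of the coredns wait that starts here (not minikube's code; same 6m budget):

    # Block until the kube-dns pods report Ready, mirroring the poll above.
    kubectl --context addons-028423 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m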
	I0817 21:13:17.082735    8320 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 21:13:17.082754    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0817 21:13:17.121402    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:13:17.186035    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0817 21:13:17.217976    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0817 21:13:17.218043    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0817 21:13:17.271745    8320 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 21:13:17.271812    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 21:13:17.300753    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:13:17.301563    8320 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 21:13:17.301618    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 21:13:17.360622    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:13:17.447285    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:13:17.452983    8320 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:13:17.453003    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0817 21:13:17.498405    8320 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 21:13:17.498429    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 21:13:17.501185    8320 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0817 21:13:17.501206    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0817 21:13:17.510885    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 21:13:17.510911    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0817 21:13:17.635997    8320 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 21:13:17.636023    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 21:13:17.643015    8320 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:13:17.643039    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 21:13:17.684428    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:13:17.761522    8320 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0817 21:13:17.761548    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0817 21:13:17.846158    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 21:13:17.846182    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0817 21:13:17.873579    8320 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 21:13:17.873605    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0817 21:13:17.885134    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:13:18.007144    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 21:13:18.007221    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0817 21:13:18.132580    8320 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0817 21:13:18.132658    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0817 21:13:18.172907    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 21:13:18.172979    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0817 21:13:18.281061    8320 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 21:13:18.281127    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0817 21:13:18.348416    8320 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0817 21:13:18.348507    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0817 21:13:18.352237    8320 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:13:18.352323    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0817 21:13:18.482943    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 21:13:18.483002    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0817 21:13:18.547968    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:13:18.615681    8320 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0817 21:13:18.615751    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0817 21:13:18.713128    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 21:13:18.713195    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0817 21:13:18.735544    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:18.821725    8320 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0817 21:13:18.821793    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0817 21:13:18.939255    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 21:13:18.939313    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0817 21:13:18.981488    8320 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.323802538s)
	I0817 21:13:18.981518    8320 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
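The bash pipeline completed above patches CoreDNS in place: it dumps the coredns ConfigMap, splices a hosts block (mapping host.minikube.internal to the gateway 192.168.49.1, with fallthrough to normal resolution) ahead of the forward directive and a log directive ahead of errors, then replaces the ConfigMap. A hedged standalone form of the same edit:

    # Same edit as the logged command, runnable against the cluster's own
    # kubeconfig; the sed expressions are copied verbatim from the log.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl -n kube-system replace -f -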
	I0817 21:13:19.042551    8320 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:13:19.042575    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0817 21:13:19.134432    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 21:13:19.134455    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0817 21:13:19.238317    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:13:19.311388    8320 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:13:19.311410    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 21:13:19.388841    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:13:19.772212    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.650727232s)
	I0817 21:13:19.772296    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.586191975s)
	I0817 21:13:20.735665    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:22.760099    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:22.984720    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.683883167s)
	I0817 21:13:22.985119    8320 addons.go:467] Verifying addon ingress=true in "addons-028423"
	I0817 21:13:22.988945    8320 out.go:177] * Verifying ingress addon...
	I0817 21:13:22.985149    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.537544133s)
	I0817 21:13:22.984896    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.300441722s)
	I0817 21:13:22.984950    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.099790729s)
	I0817 21:13:22.985045    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.436999042s)
	I0817 21:13:22.985092    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.746751753s)
	I0817 21:13:22.984831    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.624139892s)
	I0817 21:13:22.992226    8320 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 21:13:22.989146    8320 addons.go:467] Verifying addon registry=true in "addons-028423"
	I0817 21:13:22.989187    8320 addons.go:467] Verifying addon metrics-server=true in "addons-028423"
	W0817 21:13:22.989279    8320 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0817 21:13:22.994809    8320 out.go:177] * Verifying registry addon...
	I0817 21:13:22.994786    8320 retry.go:31] will retry after 278.39744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
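The failure quoted twice above is a CRD establishment race: a single apply creates the VolumeSnapshot CRDs and, in the same invocation, a VolumeSnapshotClass, but the new API group is not yet discoverable, so the class has no resource mapping; retry.go backs off 278ms and the --force reapply below goes through. A hedged sketch of ordering the steps to avoid the race (not minikube's retry logic):

    # Apply the CRDs first, wait until the API server reports them
    # Established, then apply the resources that depend on them.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml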
	I0817 21:13:22.996572    8320 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 21:13:22.998752    8320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 21:13:22.998996    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:23.011870    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:23.012519    8320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 21:13:23.012531    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:23.016910    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:23.157074    8320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 21:13:23.157197    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:23.194266    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:23.273710    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:13:23.387298    8320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 21:13:23.500290    8320 addons.go:231] Setting addon gcp-auth=true in "addons-028423"
	I0817 21:13:23.500383    8320 host.go:66] Checking if "addons-028423" exists ...
	I0817 21:13:23.500909    8320 cli_runner.go:164] Run: docker container inspect addons-028423 --format={{.State.Status}}
	I0817 21:13:23.518496    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:23.532510    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:23.552186    8320 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0817 21:13:23.552243    8320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028423
	I0817 21:13:23.587552    8320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/addons-028423/id_rsa Username:docker}
	I0817 21:13:24.053731    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:24.056531    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:24.524514    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:24.527473    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:24.725948    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.337039358s)
	I0817 21:13:24.726028    8320 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-028423"
	I0817 21:13:24.728531    8320 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 21:13:24.731863    8320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 21:13:24.739547    8320 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 21:13:24.739648    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:24.745245    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:25.018942    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:25.031555    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:25.100768    8320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.826965813s)
	I0817 21:13:25.100840    8320 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.548628292s)
	I0817 21:13:25.104601    8320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:13:25.107167    8320 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0817 21:13:25.109408    8320 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 21:13:25.109430    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 21:13:25.142260    8320 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 21:13:25.142281    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0817 21:13:25.179744    8320 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:13:25.179762    8320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0817 21:13:25.204831    8320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:13:25.234829    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:25.251998    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:25.519844    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:25.523187    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:25.752645    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:26.036542    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:26.041821    8320 addons.go:467] Verifying addon gcp-auth=true in "addons-028423"
	I0817 21:13:26.044224    8320 out.go:177] * Verifying gcp-auth addon...
	I0817 21:13:26.045394    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:26.047307    8320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 21:13:26.060051    8320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 21:13:26.060071    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:26.067418    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:26.251301    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:26.518317    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:26.523702    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:26.571969    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:26.753415    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:27.018001    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:27.022463    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:27.071681    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:27.235516    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:27.251990    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:27.516671    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:27.522660    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:27.572447    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:27.751473    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:28.021009    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:28.025782    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:28.071898    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:28.252937    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:28.524974    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:28.528378    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:28.574600    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:28.753250    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:29.019479    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:29.024496    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:29.074126    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:29.238224    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:29.252473    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:29.519014    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:29.523481    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:29.574067    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:29.766551    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:30.019469    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:30.025838    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:30.073010    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:30.252617    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:30.517946    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:30.523297    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:30.572773    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:30.752153    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:31.017425    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:31.023100    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:31.072296    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:31.252961    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:31.517670    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:31.523154    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:31.573206    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:31.735333    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:31.751540    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:32.018104    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:32.023056    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:32.072238    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:32.252189    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:32.517616    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:32.522855    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:32.572911    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:32.751850    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:33.017680    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:33.023222    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:33.072601    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:33.255187    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:33.517610    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:33.525045    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:33.573042    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:33.750990    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:34.016712    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:34.023130    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:34.072562    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:34.235175    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:34.253293    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:34.522961    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:34.526463    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:34.571994    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:34.752068    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:35.017511    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:35.023621    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:35.073829    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:35.252110    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:35.519566    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:35.531953    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:35.573582    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:35.755329    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:36.025917    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:36.027165    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:36.072034    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:36.259651    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:36.261011    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:36.517030    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:36.521614    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:36.571892    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:36.751076    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:37.018579    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:37.023342    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:37.071057    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:37.251397    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:37.516857    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:37.522927    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:37.572149    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:37.750988    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:38.016404    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:38.022368    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:38.071489    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:38.251197    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:38.516442    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:38.522046    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:38.572267    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:38.735396    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:38.751582    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:39.016538    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:39.022253    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:39.072190    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:39.250613    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:39.516374    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:39.521961    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:39.572981    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:39.751275    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:40.026630    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:40.027128    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:40.071600    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:40.251230    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:40.516896    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:40.521412    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:40.571896    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:40.751626    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:41.016182    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:41.021408    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:41.071085    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:41.234738    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:41.250806    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:41.517154    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:41.523198    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:41.572755    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:41.750646    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:42.017235    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:42.021664    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:42.071418    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:42.256409    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:42.516550    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:42.522021    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:42.571611    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:42.751006    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:43.017306    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:43.022171    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:43.071954    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:43.252492    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:43.516885    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:43.522311    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:43.572646    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:43.734219    8320 pod_ready.go:102] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"False"
	I0817 21:13:43.750437    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... kapi.go:96 polling repeated at ~500ms per label through 21:13:55 (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, all still Pending: [<nil>]); pod_ready.go:102 kept reporting pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace with status "Ready":"False" at ~2s intervals over the same window ...]
	I0817 21:13:55.234936    8320 pod_ready.go:92] pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.234960    8320 pod_ready.go:81] duration metric: took 38.542300844s waiting for pod "coredns-5d78c9869d-hmv5z" in "kube-system" namespace to be "Ready" ...
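
The duration metric above comes from minikube's per-pod readiness wait: the named pod is re-fetched until its PodReady condition turns True. A minimal client-go sketch of that check follows; the kubeconfig source and poll timings are illustrative, not minikube's actual values.

    // pod_ready_sketch.go: poll one pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-hmv5z", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    // Each false iteration corresponds to one "Ready":"False" log line.
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("ready:", err == nil)
    }

Polling a plain Get at a fixed interval, rather than opening a watch, matches the ~2s cadence of the "Ready":"False" lines above.
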
	I0817 21:13:55.234972    8320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xpfl8" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.237726    8320 pod_ready.go:97] error getting pod "coredns-5d78c9869d-xpfl8" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-xpfl8" not found
	I0817 21:13:55.237750    8320 pod_ready.go:81] duration metric: took 2.771497ms waiting for pod "coredns-5d78c9869d-xpfl8" in "kube-system" namespace to be "Ready" ...
	E0817 21:13:55.237762    8320 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-xpfl8" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-xpfl8" not found
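
The E-line above is informational rather than fatal: the second coredns replica disappeared while the wait was in flight (presumably scaled away during cluster setup), and the waiter treats a NotFound API error as "skip" instead of a failure. A sketch of that distinction, using the standard apimachinery helper:

    // notfound_sketch.go: a vanished pod is "skip", any other API error is fatal.
    package main

    import (
        "context"
        "fmt"

        k8serrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _, err = cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-xpfl8", metav1.GetOptions{})
        switch {
        case k8serrors.IsNotFound(err):
            fmt.Println("pod gone, skipping") // mirrors the "(skipping!)" log line
        case err != nil:
            panic(err)
        default:
            fmt.Println("pod still exists")
        }
    }
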
	I0817 21:13:55.237768    8320 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.244749    8320 pod_ready.go:92] pod "etcd-addons-028423" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.244773    8320 pod_ready.go:81] duration metric: took 6.997969ms waiting for pod "etcd-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.244787    8320 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.256495    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:55.261935    8320 pod_ready.go:92] pod "kube-apiserver-addons-028423" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.261959    8320 pod_ready.go:81] duration metric: took 17.163603ms waiting for pod "kube-apiserver-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.261970    8320 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.269945    8320 pod_ready.go:92] pod "kube-controller-manager-addons-028423" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.269970    8320 pod_ready.go:81] duration metric: took 7.991928ms waiting for pod "kube-controller-manager-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.269982    8320 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dx7s9" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.432667    8320 pod_ready.go:92] pod "kube-proxy-dx7s9" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.432690    8320 pod_ready.go:81] duration metric: took 162.701606ms waiting for pod "kube-proxy-dx7s9" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.432701    8320 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.517513    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:55.522400    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:55.572235    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:55.751404    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:55.832286    8320 pod_ready.go:92] pod "kube-scheduler-addons-028423" in "kube-system" namespace has status "Ready":"True"
	I0817 21:13:55.832314    8320 pod_ready.go:81] duration metric: took 399.60573ms waiting for pod "kube-scheduler-addons-028423" in "kube-system" namespace to be "Ready" ...
	I0817 21:13:55.832323    8320 pod_ready.go:38] duration metric: took 39.151475649s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
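
Each interleaved kapi.go:96 line is one iteration of a separate per-addon wait that lists pods by label selector; "Pending: [<nil>]" means a matching pod is still Pending with no failure reason recorded yet. A rough client-go equivalent of one such loop (namespace, selector and timings are illustrative):

    // label_wait_sketch.go: poll pods matching a label until all are Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        cs := kubernetes.NewForConfigOrDie(cfg)
        selector := "app.kubernetes.io/name=ingress-nginx"
        err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // nothing to report yet; retry
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    return false, nil
                }
            }
            return true, nil
        })
        fmt.Println("done:", err)
    }
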
	I0817 21:13:55.832371    8320 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:13:55.832450    8320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:13:55.847468    8320 api_server.go:72] duration metric: took 39.572059323s to wait for apiserver process to appear ...
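
The pgrep invocation above is an exit-code liveness check: -f matches the pattern against each process's full command line, -x requires that match to be exact (anchored), and -n keeps only the newest matching PID. A local sketch of the same check (the real one runs inside the node over SSH):

    // pgrep_sketch.go: detect a live kube-apiserver process by command line.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Non-zero exit from pgrep means "no matching process".
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Printf("newest apiserver pid: %s", out)
    }
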
	I0817 21:13:55.847494    8320 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:13:55.847511    8320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 21:13:55.856922    8320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 21:13:55.859208    8320 api_server.go:141] control plane version: v1.27.4
	I0817 21:13:55.859240    8320 api_server.go:131] duration metric: took 11.73881ms to wait for apiserver health ...
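
The healthz probe is a plain HTTPS GET against the apiserver that must come back 200 with body "ok", exactly as logged above. A self-contained sketch; the real client trusts the cluster CA, and InsecureSkipVerify here only keeps the example short:

    // healthz_sketch.go: probe the apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
    }

On a default kubeadm cluster, /healthz is readable anonymously (via the system:public-info-viewer binding), which is why no client certificate is needed for this particular endpoint.
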
	I0817 21:13:55.859250    8320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:13:56.018484    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:56.024470    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:56.041652    8320 system_pods.go:59] 17 kube-system pods found
	I0817 21:13:56.041707    8320 system_pods.go:61] "coredns-5d78c9869d-hmv5z" [f79fbfc1-82bd-4212-af73-6327944ec150] Running
	I0817 21:13:56.041720    8320 system_pods.go:61] "csi-hostpath-attacher-0" [cbdd4dd2-fdc7-4cfe-8591-74262c927b8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0817 21:13:56.041729    8320 system_pods.go:61] "csi-hostpath-resizer-0" [a149579d-9560-49d7-8383-07cabe1736d8] Running
	I0817 21:13:56.041740    8320 system_pods.go:61] "csi-hostpathplugin-nlmq9" [6085f8cb-51dc-4b78-be86-6a5d25d71c36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:13:56.041761    8320 system_pods.go:61] "etcd-addons-028423" [788c1e5d-bc9b-43d0-b7e2-1ed4b7fbb87d] Running
	I0817 21:13:56.041773    8320 system_pods.go:61] "kindnet-sfhzs" [653d1442-ce0c-4386-8017-ae7c0b00a30c] Running
	I0817 21:13:56.041778    8320 system_pods.go:61] "kube-apiserver-addons-028423" [ce3a10bf-da27-452e-bf15-2bb028ccc6a6] Running
	I0817 21:13:56.041791    8320 system_pods.go:61] "kube-controller-manager-addons-028423" [9efd06d4-f93f-452c-a494-b4c0ceceb900] Running
	I0817 21:13:56.041798    8320 system_pods.go:61] "kube-ingress-dns-minikube" [c87a0d3c-c59c-4167-9c63-c8ae229fff97] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0817 21:13:56.041811    8320 system_pods.go:61] "kube-proxy-dx7s9" [fb2890ae-00d3-44e1-8c49-b89dbffa50e3] Running
	I0817 21:13:56.041817    8320 system_pods.go:61] "kube-scheduler-addons-028423" [4497b4a5-1ee9-4f86-b5c8-9e7e77041261] Running
	I0817 21:13:56.041824    8320 system_pods.go:61] "metrics-server-7746886d4f-5842c" [a073cb3c-d435-43d6-8c02-49700ae1503f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 21:13:56.041833    8320 system_pods.go:61] "registry-jjtqn" [cd975e55-332b-4a73-a6cf-587df43db3a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:13:56.041846    8320 system_pods.go:61] "registry-proxy-7lbds" [ca601be3-5325-4d4e-9c0c-84985c85a22f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:13:56.041852    8320 system_pods.go:61] "snapshot-controller-75bbb956b9-dzrh7" [e0a558f2-a47e-4d8e-b6bc-89132e37cd79] Running
	I0817 21:13:56.041863    8320 system_pods.go:61] "snapshot-controller-75bbb956b9-jdnrq" [ac0ecfe5-e342-40d3-bf70-9b7d4973a355] Running
	I0817 21:13:56.041868    8320 system_pods.go:61] "storage-provisioner" [2325f2a8-1c3e-413a-96b7-afd0b8f19d32] Running
	I0817 21:13:56.041874    8320 system_pods.go:74] duration metric: took 182.619295ms to wait for pod list to return data ...
	I0817 21:13:56.041886    8320 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:13:56.073333    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:56.237714    8320 default_sa.go:45] found service account: "default"
	I0817 21:13:56.237737    8320 default_sa.go:55] duration metric: took 195.845932ms for default service account to be created ...
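
The "default" ServiceAccount is created asynchronously by the service-account controller after the namespace appears, so this step just re-lists until it shows up. A sketch under the same assumptions as the earlier ones:

    // default_sa_sketch.go: wait for the "default" ServiceAccount to exist.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        cs := kubernetes.NewForConfigOrDie(cfg)
        err := wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
            sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return false, nil // transient API error: retry
            }
            for _, sa := range sas.Items {
                if sa.Name == "default" {
                    return true, nil // "found service account" in the log
                }
            }
            return false, nil
        })
        fmt.Println("default service account present:", err == nil)
    }
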
	I0817 21:13:56.237747    8320 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:13:56.252025    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:56.444120    8320 system_pods.go:86] 17 kube-system pods found
	I0817 21:13:56.444151    8320 system_pods.go:89] "coredns-5d78c9869d-hmv5z" [f79fbfc1-82bd-4212-af73-6327944ec150] Running
	I0817 21:13:56.444163    8320 system_pods.go:89] "csi-hostpath-attacher-0" [cbdd4dd2-fdc7-4cfe-8591-74262c927b8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0817 21:13:56.444170    8320 system_pods.go:89] "csi-hostpath-resizer-0" [a149579d-9560-49d7-8383-07cabe1736d8] Running
	I0817 21:13:56.444180    8320 system_pods.go:89] "csi-hostpathplugin-nlmq9" [6085f8cb-51dc-4b78-be86-6a5d25d71c36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:13:56.444191    8320 system_pods.go:89] "etcd-addons-028423" [788c1e5d-bc9b-43d0-b7e2-1ed4b7fbb87d] Running
	I0817 21:13:56.444197    8320 system_pods.go:89] "kindnet-sfhzs" [653d1442-ce0c-4386-8017-ae7c0b00a30c] Running
	I0817 21:13:56.444207    8320 system_pods.go:89] "kube-apiserver-addons-028423" [ce3a10bf-da27-452e-bf15-2bb028ccc6a6] Running
	I0817 21:13:56.444213    8320 system_pods.go:89] "kube-controller-manager-addons-028423" [9efd06d4-f93f-452c-a494-b4c0ceceb900] Running
	I0817 21:13:56.444222    8320 system_pods.go:89] "kube-ingress-dns-minikube" [c87a0d3c-c59c-4167-9c63-c8ae229fff97] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0817 21:13:56.444232    8320 system_pods.go:89] "kube-proxy-dx7s9" [fb2890ae-00d3-44e1-8c49-b89dbffa50e3] Running
	I0817 21:13:56.444237    8320 system_pods.go:89] "kube-scheduler-addons-028423" [4497b4a5-1ee9-4f86-b5c8-9e7e77041261] Running
	I0817 21:13:56.444256    8320 system_pods.go:89] "metrics-server-7746886d4f-5842c" [a073cb3c-d435-43d6-8c02-49700ae1503f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 21:13:56.444268    8320 system_pods.go:89] "registry-jjtqn" [cd975e55-332b-4a73-a6cf-587df43db3a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:13:56.444276    8320 system_pods.go:89] "registry-proxy-7lbds" [ca601be3-5325-4d4e-9c0c-84985c85a22f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:13:56.444282    8320 system_pods.go:89] "snapshot-controller-75bbb956b9-dzrh7" [e0a558f2-a47e-4d8e-b6bc-89132e37cd79] Running
	I0817 21:13:56.444289    8320 system_pods.go:89] "snapshot-controller-75bbb956b9-jdnrq" [ac0ecfe5-e342-40d3-bf70-9b7d4973a355] Running
	I0817 21:13:56.444299    8320 system_pods.go:89] "storage-provisioner" [2325f2a8-1c3e-413a-96b7-afd0b8f19d32] Running
	I0817 21:13:56.444305    8320 system_pods.go:126] duration metric: took 206.552702ms to wait for k8s-apps to be running ...
	I0817 21:13:56.444316    8320 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:13:56.444373    8320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:13:56.461273    8320 system_svc.go:56] duration metric: took 16.946147ms WaitForService to wait for kubelet.
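
`systemctl is-active --quiet <unit>` prints nothing and signals purely through its exit status: 0 if and only if the unit is active. The kubelet check above is therefore just an exit-code test run over SSH; a local sketch of the same shape:

    // kubelet_active_sketch.go: exit-code check for a running systemd unit.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run() returns nil exactly when systemctl exits 0 (unit active).
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
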
	I0817 21:13:56.461303    8320 kubeadm.go:581] duration metric: took 40.185898158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:13:56.461322    8320 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:13:56.516673    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:56.522185    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:56.571236    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:13:56.631512    8320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0817 21:13:56.631559    8320 node_conditions.go:123] node cpu capacity is 2
	I0817 21:13:56.631572    8320 node_conditions.go:105] duration metric: took 170.244894ms to run NodePressure ...
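
The NodePressure step reads each node's reported capacity (the ephemeral-storage and cpu figures above) and would flag any condition other than Ready that reports True. A client-go sketch of reading the same fields:

    // node_pressure_sketch.go: read node capacity and pressure conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure, DiskPressure and PIDPressure should all be False;
                // any non-Ready condition stuck at True is a problem sign.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  condition set: %s\n", c.Type)
                }
            }
        }
    }
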
	I0817 21:13:56.631584    8320 start.go:228] waiting for startup goroutines ...
	I0817 21:13:56.751394    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:13:57.018006    8320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:13:57.021522    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:13:57.071950    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling continued at ~500ms per label through 21:14:05, with app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth and kubernetes.io/minikube-addons=csi-hostpath-driver all still Pending: [<nil>] ...]
	I0817 21:14:06.017948    8320 kapi.go:107] duration metric: took 43.025720748s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 21:14:06.022406    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:06.071438    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:06.251736    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:06.521733    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:06.572174    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:06.751259    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:07.022089    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:07.071276    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:07.252580    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:07.529150    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:07.572456    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:07.758436    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:08.024851    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:08.071641    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:08.254836    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:08.522855    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:08.570909    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:08.753644    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:09.026280    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:09.073664    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:09.252348    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:09.526544    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:09.575272    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:14:09.754774    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:10.031810    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:14:10.073921    8320 kapi.go:107] duration metric: took 44.026608119s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 21:14:10.076323    8320 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-028423 cluster.
	I0817 21:14:10.077617    8320 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 21:14:10.080223    8320 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
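
The three gcp-auth notes describe an admission webhook that mutates newly created pods to mount GCP credentials; a pod opts out by carrying the `gcp-auth-skip-secret` label key quoted above. A sketch of such a pod spec using the client-go types (pod name and image are illustrative):

    // gcp_auth_skip_sketch.go: a pod labeled to skip gcp-auth credential mounting.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func skipGCPAuthPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-creds", // hypothetical name
                // The webhook leaves pods carrying this label key alone.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
            },
        }
    }

    func main() { _ = skipGCPAuthPod() }
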
	I0817 21:14:10.251820    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:10.521935    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... kapi.go:96 polling continued at ~500ms per label through 21:14:19, with kubernetes.io/minikube-addons=registry and kubernetes.io/minikube-addons=csi-hostpath-driver still Pending: [<nil>] ...]
	I0817 21:14:19.522117    8320 kapi.go:107] duration metric: took 56.523361783s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 21:14:19.750602    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:20.251291    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:20.751035    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:21.250938    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:21.751603    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:22.253898    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:22.750763    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:23.251854    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:23.750930    8320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:14:24.250691    8320 kapi.go:107] duration metric: took 59.518824289s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 21:14:24.253114    8320 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, inspektor-gadget, ingress-dns, metrics-server, volumesnapshots, ingress, gcp-auth, registry, csi-hostpath-driver
	I0817 21:14:24.255133    8320 addons.go:502] enable addons completed in 1m8.231913399s: enabled=[storage-provisioner cloud-spanner default-storageclass inspektor-gadget ingress-dns metrics-server volumesnapshots ingress gcp-auth registry csi-hostpath-driver]
	I0817 21:14:24.255188    8320 start.go:233] waiting for cluster config update ...
	I0817 21:14:24.255211    8320 start.go:242] writing updated cluster config ...
	I0817 21:14:24.255538    8320 ssh_runner.go:195] Run: rm -f paused
	I0817 21:14:24.621258    8320 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:14:24.624657    8320 out.go:177] * Done! kubectl is now configured to use "addons-028423" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d64a9e54fd1f7       13753a81eccfd       9 seconds ago        Exited              hello-world-app           2                   8da6f618c4682       hello-world-app-65bdb79f98-jtcjq
	5a591e302702e       397432849901d       33 seconds ago       Running             nginx                     0                   3ce5026983872       nginx
	ba7bec9fcc1da       71e15c1ff4390       About a minute ago   Running             headlamp                  0                   ee66e3e8751cc       headlamp-5c78f74d8d-trhmd
	aa0818e010c32       4206ae70dd039       2 minutes ago        Exited              registry                  0                   7de270e181930       registry-jjtqn
	cdf9e954066df       2a5f29343eb03       2 minutes ago        Running             gcp-auth                  0                   91d8a77066c80       gcp-auth-58478865f7-gcb2k
	a50fa3d5b5c16       8f2588812ab29       2 minutes ago        Exited              patch                     0                   fce9853b2b892       ingress-nginx-admission-patch-j7kfw
	a7d51f06e469e       8f2588812ab29       2 minutes ago        Exited              create                    0                   ac0d1ef9a3f17       ingress-nginx-admission-create-456sr
	96a1d08e3a572       97e04611ad434       2 minutes ago        Running             coredns                   0                   9a244999768fd       coredns-5d78c9869d-hmv5z
	4c8892487f29b       ba04bb24b9575       3 minutes ago        Running             storage-provisioner       0                   a80aaae03a969       storage-provisioner
	081adba6617c3       b18bf71b941ba       3 minutes ago        Running             kindnet-cni               0                   cf7ccae5e29bd       kindnet-sfhzs
	761b2f804ee55       532e5a30e948f       3 minutes ago        Running             kube-proxy                0                   cd61be4e64a2c       kube-proxy-dx7s9
	64f8f17d1935a       64aece92d6bde       3 minutes ago        Running             kube-apiserver            0                   89f626e1305eb       kube-apiserver-addons-028423
	d654998b5ea86       389f6f052cf83       3 minutes ago        Running             kube-controller-manager   0                   4f6935dc64041       kube-controller-manager-addons-028423
	550b99e448b0d       6eb63895cb67f       3 minutes ago        Running             kube-scheduler            0                   34cf0fca14564       kube-scheduler-addons-028423
	d923da00541d5       24bc64e911039       3 minutes ago        Running             etcd                      0                   b83c4866812f2       etcd-addons-028423
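
In this table, ATTEMPT counts kubelet restarts of the same container, so the Exited hello-world-app entry at attempt 2 has already been restarted twice. The listing itself comes from the container runtime; with containerd, a sketch of pulling the same data through its Go client in the "k8s.io" namespace the kubelet uses (socket path as in the node annotations below):

    // containerd_ps_sketch.go: list Kubernetes-managed containers from containerd.
    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // CRI containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            status := "created" // no task started yet
            if task, terr := c.Task(ctx, nil); terr == nil {
                if st, serr := task.Status(ctx); serr == nil {
                    status = string(st.Status) // running, stopped, ...
                }
            }
            fmt.Printf("%s\t%s\n", c.ID()[:13], status) // IDs truncated like the table
        }
    }
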
	
	* 
	* ==> containerd <==
	* Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.520033033Z" level=info msg="StopContainer for \"af5aabba4ed46137cecf3b1f64fd643fdcb55aaac29e69acfcdcc5a6eb458305\" returns successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.520718732Z" level=info msg="StopPodSandbox for \"881f8b50bf920e7d3a2ce8618d031bca516c431b2c053e56bd2eb6290d6fce13\""
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.520930208Z" level=info msg="Container to stop \"af5aabba4ed46137cecf3b1f64fd643fdcb55aaac29e69acfcdcc5a6eb458305\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.528633143Z" level=info msg="shim disconnected" id=7de270e1819305d8f411986d56ba25c229aa1b8fe1e8f0c5d166b648d816028d
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.529230786Z" level=warning msg="cleaning up after shim disconnected" id=7de270e1819305d8f411986d56ba25c229aa1b8fe1e8f0c5d166b648d816028d namespace=k8s.io
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.529353812Z" level=info msg="cleaning up dead shim"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.557606045Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:16:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10662 runtime=io.containerd.runc.v2\n"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.570237028Z" level=info msg="shim disconnected" id=4547bc71d50dabdb5ade973e6fffdaeb44612424c78024a675906d17a92777af
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.570464159Z" level=warning msg="cleaning up after shim disconnected" id=4547bc71d50dabdb5ade973e6fffdaeb44612424c78024a675906d17a92777af namespace=k8s.io
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.570556646Z" level=info msg="cleaning up dead shim"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.595799051Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:16:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10714 runtime=io.containerd.runc.v2\n"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.602194800Z" level=info msg="shim disconnected" id=881f8b50bf920e7d3a2ce8618d031bca516c431b2c053e56bd2eb6290d6fce13
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.605756627Z" level=warning msg="cleaning up after shim disconnected" id=881f8b50bf920e7d3a2ce8618d031bca516c431b2c053e56bd2eb6290d6fce13 namespace=k8s.io
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.605790481Z" level=info msg="cleaning up dead shim"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.618345001Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:16:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10741 runtime=io.containerd.runc.v2\n"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.620177487Z" level=info msg="TearDown network for sandbox \"7de270e1819305d8f411986d56ba25c229aa1b8fe1e8f0c5d166b648d816028d\" successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.620311836Z" level=info msg="StopPodSandbox for \"7de270e1819305d8f411986d56ba25c229aa1b8fe1e8f0c5d166b648d816028d\" returns successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.673904213Z" level=info msg="TearDown network for sandbox \"4547bc71d50dabdb5ade973e6fffdaeb44612424c78024a675906d17a92777af\" successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.673948569Z" level=info msg="StopPodSandbox for \"4547bc71d50dabdb5ade973e6fffdaeb44612424c78024a675906d17a92777af\" returns successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.687380482Z" level=info msg="TearDown network for sandbox \"881f8b50bf920e7d3a2ce8618d031bca516c431b2c053e56bd2eb6290d6fce13\" successfully"
	Aug 17 21:16:31 addons-028423 containerd[743]: time="2023-08-17T21:16:31.687434964Z" level=info msg="StopPodSandbox for \"881f8b50bf920e7d3a2ce8618d031bca516c431b2c053e56bd2eb6290d6fce13\" returns successfully"
	Aug 17 21:16:32 addons-028423 containerd[743]: time="2023-08-17T21:16:32.611303756Z" level=info msg="RemoveContainer for \"03d0a332540b7672e64a0f9744c075aead5e1d109897eb06f3708517aebd5083\""
	Aug 17 21:16:32 addons-028423 containerd[743]: time="2023-08-17T21:16:32.620031863Z" level=info msg="RemoveContainer for \"03d0a332540b7672e64a0f9744c075aead5e1d109897eb06f3708517aebd5083\" returns successfully"
	Aug 17 21:16:32 addons-028423 containerd[743]: time="2023-08-17T21:16:32.625562428Z" level=info msg="RemoveContainer for \"af5aabba4ed46137cecf3b1f64fd643fdcb55aaac29e69acfcdcc5a6eb458305\""
	Aug 17 21:16:32 addons-028423 containerd[743]: time="2023-08-17T21:16:32.631372907Z" level=info msg="RemoveContainer for \"af5aabba4ed46137cecf3b1f64fd643fdcb55aaac29e69acfcdcc5a6eb458305\" returns successfully"
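
The shim messages above trace containerd's CRI stop path in order: StopContainer, StopPodSandbox, the runc shim for each container disconnects, the dead shim is reaped ("cleaning up dead shim"), and the sandbox network is torn down. The "must be in running or unknown state" line is benign: that container had already exited, so there was nothing left to signal. A sketch of the client-side half of that lifecycle (namespace and error handling simplified, id hypothetical):

    // task_stop_sketch.go: stop and reap one containerd task.
    package main

    import (
        "context"
        "syscall"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func stopContainer(client *containerd.Client, id string) error {
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err // already exited: nothing to signal, as in the log
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        // Deleting the task is what reaps the shim ("cleaning up dead shim").
        _, err = task.Delete(ctx)
        return err
    }

    func main() {}
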
	
	* 
	* ==> coredns [96a1d08e3a5727d7fae18fdad173293c54308ac4f38b6b26bea8dae2dd005b90] <==
	* [INFO] 10.244.0.12:34944 - 50565 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001139386s
	[INFO] 10.244.0.12:34944 - 5915 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004252253s
	[INFO] 10.244.0.12:44868 - 18112 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004346085s
	[INFO] 10.244.0.12:34944 - 422 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061176s
	[INFO] 10.244.0.12:44868 - 56219 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054743s
	[INFO] 10.244.0.16:59044 - 60867 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171566s
	[INFO] 10.244.0.16:59044 - 9935 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126702s
	[INFO] 10.244.0.16:60938 - 37956 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110866s
	[INFO] 10.244.0.16:60938 - 25674 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097245s
	[INFO] 10.244.0.16:56408 - 10565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114033s
	[INFO] 10.244.0.16:56408 - 11449 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087875s
	[INFO] 10.244.0.16:54447 - 24252 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00303638s
	[INFO] 10.244.0.16:54447 - 12991 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003156426s
	[INFO] 10.244.0.16:47379 - 12464 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112712s
	[INFO] 10.244.0.16:47379 - 29107 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000334058s
	[INFO] 10.244.0.16:41272 - 6098 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098599s
	[INFO] 10.244.0.16:41272 - 21462 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075667s
	[INFO] 10.244.0.16:37320 - 22494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134012s
	[INFO] 10.244.0.16:37320 - 53720 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151161s
	[INFO] 10.244.0.16:58830 - 55077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106066s
	[INFO] 10.244.0.16:58830 - 59179 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106993s
	[INFO] 10.244.0.16:39113 - 32494 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001535178s
	[INFO] 10.244.0.16:39113 - 58861 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001759191s
	[INFO] 10.244.0.16:33122 - 8364 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078546s
	[INFO] 10.244.0.16:33122 - 7598 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061546s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-028423
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-028423
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=addons-028423
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_13_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-028423
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-028423
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:16:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:16:36 +0000   Thu, 17 Aug 2023 21:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:16:36 +0000   Thu, 17 Aug 2023 21:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:16:36 +0000   Thu, 17 Aug 2023 21:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:16:36 +0000   Thu, 17 Aug 2023 21:13:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-028423
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 39e4180fadc74b3baeca820d202dc246
	  System UUID:                7eaef8df-3ff5-4921-9dea-4c0e195b90cd
	  Boot ID:                    da56fcbe-e8d4-44e4-8927-1925d04822e5
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-jtcjq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-58478865f7-gcb2k                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  headlamp                    headlamp-5c78f74d8d-trhmd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 coredns-5d78c9869d-hmv5z                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m22s
	  kube-system                 etcd-addons-028423                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m37s
	  kube-system                 kindnet-sfhzs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m22s
	  kube-system                 kube-apiserver-addons-028423             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-addons-028423    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-dx7s9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 kube-scheduler-addons-028423             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node addons-028423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node addons-028423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node addons-028423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m36s                  kubelet          Node addons-028423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s                  kubelet          Node addons-028423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s                  kubelet          Node addons-028423 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m36s                  kubelet          Node addons-028423 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m26s                  kubelet          Node addons-028423 status is now: NodeReady
	  Normal  RegisteredNode           3m23s                  node-controller  Node addons-028423 event: Registered Node addons-028423 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug17 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015730] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.269498] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.619452] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [d923da00541d5fa1326ad6a2272e77a3660d39f0506ada8e0a1e4278d25f4753] <==
	* {"level":"info","ts":"2023-08-17T21:12:55.432Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-17T21:12:55.433Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T21:12:55.433Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:12:55.435Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-17T21:12:55.436Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-17T21:12:55.436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-17T21:12:55.436Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-17T21:12:56.011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-08-17T21:12:56.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:12:56.014Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:12:56.015Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-028423 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:12:56.015Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:12:56.022Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:12:56.022Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:12:56.023Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:12:56.023Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:12:56.024Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T21:12:56.026Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-17T21:12:56.027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:12:56.072Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [cdf9e954066df6bc177be4fe0c94f2be1b9e25d9679d74e8f7b8ade8ee014639] <==
	* 2023/08/17 21:14:09 GCP Auth Webhook started!
	2023/08/17 21:14:31 Ready to marshal response ...
	2023/08/17 21:14:31 Ready to write response ...
	2023/08/17 21:14:32 Ready to marshal response ...
	2023/08/17 21:14:32 Ready to write response ...
	2023/08/17 21:14:32 Ready to marshal response ...
	2023/08/17 21:14:32 Ready to write response ...
	2023/08/17 21:14:34 Ready to marshal response ...
	2023/08/17 21:14:34 Ready to write response ...
	2023/08/17 21:14:58 Ready to marshal response ...
	2023/08/17 21:14:58 Ready to write response ...
	2023/08/17 21:15:30 Ready to marshal response ...
	2023/08/17 21:15:30 Ready to write response ...
	2023/08/17 21:16:03 Ready to marshal response ...
	2023/08/17 21:16:03 Ready to write response ...
	2023/08/17 21:16:13 Ready to marshal response ...
	2023/08/17 21:16:13 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:16:39 up 58 min,  0 users,  load average: 0.51, 0.70, 0.35
	Linux addons-028423 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [081adba6617c3bcef55f4795dafe7d30d1b2198056072051a732d5f21c97d04d] <==
	* I0817 21:14:37.826588       1 main.go:227] handling current node
	I0817 21:14:47.834295       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:47.834321       1 main.go:227] handling current node
	I0817 21:14:57.846827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:57.846856       1 main.go:227] handling current node
	I0817 21:15:07.857109       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:07.857139       1 main.go:227] handling current node
	I0817 21:15:17.860826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:17.860856       1 main.go:227] handling current node
	I0817 21:15:27.873180       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:27.873209       1 main.go:227] handling current node
	I0817 21:15:37.883109       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:37.883200       1 main.go:227] handling current node
	I0817 21:15:47.888270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:47.888296       1 main.go:227] handling current node
	I0817 21:15:57.900612       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:57.900641       1 main.go:227] handling current node
	I0817 21:16:07.909582       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:16:07.909609       1 main.go:227] handling current node
	I0817 21:16:17.913676       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:16:17.913703       1 main.go:227] handling current node
	I0817 21:16:27.925819       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:16:27.925851       1 main.go:227] handling current node
	I0817 21:16:37.938538       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:16:37.938565       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [64f8f17d1935aebd77bb4c6d5b30ed6273086ac56b1e662f1213248435021b3c] <==
	* I0817 21:15:45.180653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:15:45.180806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:15:45.194716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:15:45.194763       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:15:45.227115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:15:45.227368       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:15:45.316243       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:15:45.317098       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:15:45.343648       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:15:45.344082       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0817 21:15:46.195919       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0817 21:15:46.344773       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0817 21:15:46.367738       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0817 21:15:56.989917       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0817 21:15:57.005085       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0817 21:15:58.025203       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0817 21:16:03.189447       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0817 21:16:03.613256       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.110.27.246]
	E0817 21:16:11.107071       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0817 21:16:11.107102       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 21:16:11.107142       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 21:16:11.107150       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 21:16:11.134830       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0817 21:16:13.395764       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.104.188.146]
	
	* 
	* ==> kube-controller-manager [d654998b5ea86f7945d6b8bd80e7b4c065100b3d00162b9610462bd5381da43a] <==
	* W0817 21:16:00.961972       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:00.962013       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:16:04.639589       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:04.639626       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:16:07.094078       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0817 21:16:07.175579       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:07.175613       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:16:13.179170       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0817 21:16:13.194276       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-jtcjq"
	I0817 21:16:15.516315       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0817 21:16:15.516351       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:16:15.950697       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0817 21:16:15.950739       1 shared_informer.go:318] Caches are synced for garbage collector
	W0817 21:16:18.642547       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:18.642583       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:16:20.643446       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:20.643482       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:16:21.007881       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:21.007919       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:16:26.640651       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:26.640685       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:16:30.329503       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0817 21:16:30.339486       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0817 21:16:38.145002       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:16:38.145034       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [761b2f804ee55292e8ad49de162f76ba7cbed669c3f569716faacbda28116d4e] <==
	* I0817 21:13:17.416791       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0817 21:13:17.443135       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0817 21:13:17.443198       1 server_others.go:554] "Using iptables proxy"
	I0817 21:13:17.482936       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:13:17.482974       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0817 21:13:17.482983       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0817 21:13:17.482994       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0817 21:13:17.483056       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:13:17.483557       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:13:17.483569       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:13:17.487094       1 config.go:188] "Starting service config controller"
	I0817 21:13:17.487109       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:13:17.487132       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:13:17.487135       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:13:17.491886       1 config.go:315] "Starting node config controller"
	I0817 21:13:17.491925       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:13:17.587404       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:13:17.587452       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:13:17.593352       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [550b99e448b0d9fb117b2f114c4e25641430606702fdfcc3087e48b26dc65aa6] <==
	* W0817 21:12:59.151356       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:12:59.151834       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 21:12:59.151398       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:12:59.151854       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:12:59.151463       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:12:59.151870       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 21:13:00.005939       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:13:00.005980       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0817 21:13:00.070305       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:13:00.070403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 21:13:00.118994       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:13:00.119036       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 21:13:00.119086       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:13:00.119097       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:13:00.120945       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:13:00.120975       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 21:13:00.126481       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:13:00.126517       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 21:13:00.249001       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:13:00.249053       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 21:13:00.343171       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:13:00.343365       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:13:00.371019       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:13:00.371254       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0817 21:13:02.812188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	Aug 17 21:16:30 addons-028423 kubelet[1359]: E0817 21:16:30.361805    1359 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-nvt22.177c483dd2a7d225", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-nvt22", UID:"11801127-8bba-4e45-905e-9a27ff1d4236", APIVersion:"v1", ResourceVersion:"651", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-028423"}, FirstTimestamp:time.Date(2023, time.August, 17, 21, 16, 30, 348882469, time.Local), LastTimestamp:time.Date(2023, time.August, 17, 21, 16, 30, 348882469, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-nvt22.177c483dd2a7d225" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:16:30 addons-028423 kubelet[1359]: I0817 21:16:30.527476    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=56f4067c-f20e-4a9d-b5ff-6964927bdd7b path="/var/lib/kubelet/pods/56f4067c-f20e-4a9d-b5ff-6964927bdd7b/volumes"
	Aug 17 21:16:30 addons-028423 kubelet[1359]: I0817 21:16:30.527854    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c87a0d3c-c59c-4167-9c63-c8ae229fff97 path="/var/lib/kubelet/pods/c87a0d3c-c59c-4167-9c63-c8ae229fff97/volumes"
	Aug 17 21:16:30 addons-028423 kubelet[1359]: I0817 21:16:30.528254    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=faa61c23-febd-4efd-bd4a-714abcd9acd9 path="/var/lib/kubelet/pods/faa61c23-febd-4efd-bd4a-714abcd9acd9/volumes"
	Aug 17 21:16:30 addons-028423 kubelet[1359]: I0817 21:16:30.570559    1359 scope.go:115] "RemoveContainer" containerID="c2e0cbc2dfc0b3a09015b2e78e82184a48c575d624287d5e14b64e8205cec0e9"
	Aug 17 21:16:30 addons-028423 kubelet[1359]: I0817 21:16:30.571079    1359 scope.go:115] "RemoveContainer" containerID="d64a9e54fd1f7142b241554cdec872227dffc4ef25337bdcd9ea17aca2bfcc18"
	Aug 17 21:16:30 addons-028423 kubelet[1359]: E0817 21:16:30.571365    1359 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-jtcjq_default(744d16d2-ed23-465a-a1ca-9b02ca8cbac3)\"" pod="default/hello-world-app-65bdb79f98-jtcjq" podUID=744d16d2-ed23-465a-a1ca-9b02ca8cbac3
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.605421    1359 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de270e1819305d8f411986d56ba25c229aa1b8fe1e8f0c5d166b648d816028d"
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.728463    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmcgl\" (UniqueName: \"kubernetes.io/projected/cd975e55-332b-4a73-a6cf-587df43db3a2-kube-api-access-zmcgl\") pod \"cd975e55-332b-4a73-a6cf-587df43db3a2\" (UID: \"cd975e55-332b-4a73-a6cf-587df43db3a2\") "
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.730560    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd975e55-332b-4a73-a6cf-587df43db3a2-kube-api-access-zmcgl" (OuterVolumeSpecName: "kube-api-access-zmcgl") pod "cd975e55-332b-4a73-a6cf-587df43db3a2" (UID: "cd975e55-332b-4a73-a6cf-587df43db3a2"). InnerVolumeSpecName "kube-api-access-zmcgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.829039    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11801127-8bba-4e45-905e-9a27ff1d4236-webhook-cert\") pod \"11801127-8bba-4e45-905e-9a27ff1d4236\" (UID: \"11801127-8bba-4e45-905e-9a27ff1d4236\") "
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.829098    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7flb\" (UniqueName: \"kubernetes.io/projected/11801127-8bba-4e45-905e-9a27ff1d4236-kube-api-access-r7flb\") pod \"11801127-8bba-4e45-905e-9a27ff1d4236\" (UID: \"11801127-8bba-4e45-905e-9a27ff1d4236\") "
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.829126    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwqj5\" (UniqueName: \"kubernetes.io/projected/ca601be3-5325-4d4e-9c0c-84985c85a22f-kube-api-access-bwqj5\") pod \"ca601be3-5325-4d4e-9c0c-84985c85a22f\" (UID: \"ca601be3-5325-4d4e-9c0c-84985c85a22f\") "
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.829184    1359 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zmcgl\" (UniqueName: \"kubernetes.io/projected/cd975e55-332b-4a73-a6cf-587df43db3a2-kube-api-access-zmcgl\") on node \"addons-028423\" DevicePath \"\""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.831407    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca601be3-5325-4d4e-9c0c-84985c85a22f-kube-api-access-bwqj5" (OuterVolumeSpecName: "kube-api-access-bwqj5") pod "ca601be3-5325-4d4e-9c0c-84985c85a22f" (UID: "ca601be3-5325-4d4e-9c0c-84985c85a22f"). InnerVolumeSpecName "kube-api-access-bwqj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.831677    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11801127-8bba-4e45-905e-9a27ff1d4236-kube-api-access-r7flb" (OuterVolumeSpecName: "kube-api-access-r7flb") pod "11801127-8bba-4e45-905e-9a27ff1d4236" (UID: "11801127-8bba-4e45-905e-9a27ff1d4236"). InnerVolumeSpecName "kube-api-access-r7flb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.833097    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11801127-8bba-4e45-905e-9a27ff1d4236-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "11801127-8bba-4e45-905e-9a27ff1d4236" (UID: "11801127-8bba-4e45-905e-9a27ff1d4236"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.929712    1359 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11801127-8bba-4e45-905e-9a27ff1d4236-webhook-cert\") on node \"addons-028423\" DevicePath \"\""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.929751    1359 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r7flb\" (UniqueName: \"kubernetes.io/projected/11801127-8bba-4e45-905e-9a27ff1d4236-kube-api-access-r7flb\") on node \"addons-028423\" DevicePath \"\""
	Aug 17 21:16:31 addons-028423 kubelet[1359]: I0817 21:16:31.929765    1359 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bwqj5\" (UniqueName: \"kubernetes.io/projected/ca601be3-5325-4d4e-9c0c-84985c85a22f-kube-api-access-bwqj5\") on node \"addons-028423\" DevicePath \"\""
	Aug 17 21:16:32 addons-028423 kubelet[1359]: I0817 21:16:32.526481    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=11801127-8bba-4e45-905e-9a27ff1d4236 path="/var/lib/kubelet/pods/11801127-8bba-4e45-905e-9a27ff1d4236/volumes"
	Aug 17 21:16:32 addons-028423 kubelet[1359]: I0817 21:16:32.609178    1359 scope.go:115] "RemoveContainer" containerID="03d0a332540b7672e64a0f9744c075aead5e1d109897eb06f3708517aebd5083"
	Aug 17 21:16:32 addons-028423 kubelet[1359]: I0817 21:16:32.621293    1359 scope.go:115] "RemoveContainer" containerID="af5aabba4ed46137cecf3b1f64fd643fdcb55aaac29e69acfcdcc5a6eb458305"
	Aug 17 21:16:34 addons-028423 kubelet[1359]: I0817 21:16:34.527037    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ca601be3-5325-4d4e-9c0c-84985c85a22f path="/var/lib/kubelet/pods/ca601be3-5325-4d4e-9c0c-84985c85a22f/volumes"
	Aug 17 21:16:34 addons-028423 kubelet[1359]: I0817 21:16:34.527405    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=cd975e55-332b-4a73-a6cf-587df43db3a2 path="/var/lib/kubelet/pods/cd975e55-332b-4a73-a6cf-587df43db3a2/volumes"
	
	* 
	* ==> storage-provisioner [4c8892487f29b0f6ea17b63fb112eeb2d13ebb725091196b98b5cbbeda9231dc] <==
	* I0817 21:13:20.468162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:13:20.497434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:13:20.497526       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:13:20.507975       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:13:20.508877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-028423_3b70992e-8f81-4f1b-9c2d-a85b738c6d31!
	I0817 21:13:20.514799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e35e8b3b-2c66-4dee-bf31-01b2eb5c9233", APIVersion:"v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-028423_3b70992e-8f81-4f1b-9c2d-a85b738c6d31 became leader
	I0817 21:13:20.609517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-028423_3b70992e-8f81-4f1b-9c2d-a85b738c6d31!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-028423 -n addons-028423
helpers_test.go:261: (dbg) Run:  kubectl --context addons-028423 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.86s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-545557 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (26.884772841s)

                                                
                                                
-- stdout --
	* [functional-545557] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-545557 in cluster functional-545557
	* Pulling base image ...
	* Updating the running docker "functional-545557" container ...
	* Preparing Kubernetes v1.27.4 on containerd 1.6.21 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 21:20:45.060684   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-gvljn": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.060906   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.061075   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.061240   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.061460   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rprkp" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rprkp": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.061644   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.079369   32705 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	E0817 21:20:45.319035   32705 start.go:866] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-545557": Get "https://192.168.49.2:8441/api/v1/nodes/functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-545557 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 26.8849933s for "functional-545557" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-545557
helpers_test.go:235: (dbg) docker inspect functional-545557:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c",
	        "Created": "2023-08-17T21:18:52.25158104Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:18:52.582984646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/hosts",
	        "LogPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c-json.log",
	        "Name": "/functional-545557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-545557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-545557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506-init/diff:/var/lib/docker/overlay2/6e6597fd944d5f98ecbe7d9c5301a949ba6526f8982591cdfcbe3d11f113be4a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-545557",
	                "Source": "/var/lib/docker/volumes/functional-545557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-545557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-545557",
	                "name.minikube.sigs.k8s.io": "functional-545557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c939595be956f9545700a7ca6f3b0ab775a2af0eb8f59fce848e9993634c2060",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c939595be956",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-545557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c06a3ae11e99",
	                        "functional-545557"
	                    ],
	                    "NetworkID": "dd61593c3ab6fad02d296091bc6a0eb0155487ace79ece538735f5614eb170a3",
	                    "EndpointID": "d38046714a1dc43438efa47f8e9a7688ebff5054687ec97329b702a516ffd5f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
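Note: the full docker inspect JSON above is rarely needed in one piece; single fields can be pulled with Go templates, the same pattern cli_runner.go uses later in this log. A minimal Go sketch, assuming a local Docker CLI on PATH and using the container name from this run; the helper name inspectField is hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // inspectField runs "docker container inspect -f <tmpl> <name>" and
    // returns the rendered template output with surrounding whitespace trimmed.
    func inspectField(name, tmpl string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	// .State.Status should print "running" for the container dumped above.
    	status, err := inspectField("functional-545557", "{{.State.Status}}")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(status)

    	// The same template the log uses to find the forwarded SSH port
    	// (expected to print 32787 for this run).
    	port, err := inspectField("functional-545557",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(port)
    }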
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-545557 -n functional-545557
E0817 21:20:46.583808    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
helpers_test.go:239: (dbg) Done: out/minikube-linux-arm64 status --format={{.Host}} -p functional-545557 -n functional-545557: (3.747789444s)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 logs -n 25: (1.691863947s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-112266                                                         | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:19 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:20 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | minikube-local-cache-test:functional-545557                              |                   |         |         |                     |                     |
	| cache   | functional-545557 cache delete                                           | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | minikube-local-cache-test:functional-545557                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	| ssh     | functional-545557 ssh sudo                                               | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-545557                                                        | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-545557 ssh                                                    | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-545557 cache reload                                           | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	| ssh     | functional-545557 ssh                                                    | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-545557 kubectl --                                             | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | --context functional-545557                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:20:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:20:18.497074   32705 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:20:18.497250   32705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:20:18.497254   32705 out.go:309] Setting ErrFile to fd 2...
	I0817 21:20:18.497259   32705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:20:18.497501   32705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:20:18.497839   32705 out.go:303] Setting JSON to false
	I0817 21:20:18.498735   32705 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3757,"bootTime":1692303461,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:20:18.498788   32705 start.go:138] virtualization:  
	I0817 21:20:18.501773   32705 out.go:177] * [functional-545557] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:20:18.503863   32705 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:20:18.505863   32705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:20:18.503946   32705 notify.go:220] Checking for updates...
	I0817 21:20:18.509867   32705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:20:18.512331   32705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:20:18.514728   32705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:20:18.516773   32705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:20:18.519299   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:18.519383   32705 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:20:18.545164   32705 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:20:18.545273   32705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:20:18.654557   32705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-17 21:20:18.644221049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:20:18.654709   32705 docker.go:294] overlay module found
	I0817 21:20:18.656853   32705 out.go:177] * Using the docker driver based on existing profile
	I0817 21:20:18.658784   32705 start.go:298] selected driver: docker
	I0817 21:20:18.658791   32705 start.go:902] validating driver "docker" against &{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:18.658883   32705 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:20:18.658975   32705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:20:18.734308   32705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-17 21:20:18.724813076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:20:18.734784   32705 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:20:18.734822   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:18.734828   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:18.734838   32705 start_flags.go:319] config:
	{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:18.737393   32705 out.go:177] * Starting control plane node functional-545557 in cluster functional-545557
	I0817 21:20:18.739324   32705 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:20:18.741371   32705 out.go:177] * Pulling base image ...
	I0817 21:20:18.743469   32705 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:20:18.743519   32705 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4
	I0817 21:20:18.743526   32705 cache.go:57] Caching tarball of preloaded images
	I0817 21:20:18.743546   32705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:20:18.743603   32705 preload.go:174] Found /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 21:20:18.743611   32705 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on containerd
	I0817 21:20:18.743732   32705 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/config.json ...
	I0817 21:20:18.763701   32705 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:20:18.763717   32705 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:20:18.763738   32705 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:20:18.763772   32705 start.go:365] acquiring machines lock for functional-545557: {Name:mk992cc35a6d639d46bb46397ccce45c20f3d9da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:20:18.763839   32705 start.go:369] acquired machines lock for "functional-545557" in 49.944µs
	I0817 21:20:18.763858   32705 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:20:18.763862   32705 fix.go:54] fixHost starting: 
	I0817 21:20:18.764122   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	I0817 21:20:18.781969   32705 fix.go:102] recreateIfNeeded on functional-545557: state=Running err=<nil>
	W0817 21:20:18.781996   32705 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:20:18.784354   32705 out.go:177] * Updating the running docker "functional-545557" container ...
	I0817 21:20:18.786342   32705 machine.go:88] provisioning docker machine ...
	I0817 21:20:18.786397   32705 ubuntu.go:169] provisioning hostname "functional-545557"
	I0817 21:20:18.786468   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:18.808918   32705 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:18.809396   32705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:18.809410   32705 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-545557 && echo "functional-545557" | sudo tee /etc/hostname
	I0817 21:20:18.953042   32705 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-545557
	
	I0817 21:20:18.953104   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:18.972530   32705 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:18.972943   32705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:18.972959   32705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-545557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-545557/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-545557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:20:19.103678   32705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
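Note: the hostname and /etc/hosts provisioning above runs over the forwarded SSH port that sshutil.go opens (127.0.0.1:32787, user "docker", with the profile's id_rsa key). A minimal sketch of that pattern using golang.org/x/crypto/ssh, assuming the key path and port shown in this log; the runCommand helper is hypothetical, and the host-key check is disabled only because this is a local test loop:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runCommand opens one SSH session on an already-forwarded port and
    // runs a single provisioning command, returning its combined output.
    func runCommand(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runCommand("127.0.0.1:32787", "docker",
    		"/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa",
    		"hostname")
    	fmt.Println(out, err)
    }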
	I0817 21:20:19.103693   32705 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:20:19.103719   32705 ubuntu.go:177] setting up certificates
	I0817 21:20:19.103727   32705 provision.go:83] configureAuth start
	I0817 21:20:19.103781   32705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-545557
	I0817 21:20:19.125560   32705 provision.go:138] copyHostCerts
	I0817 21:20:19.125624   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:20:19.125651   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:20:19.125732   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:20:19.125819   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:20:19.125823   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:20:19.125847   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:20:19.125896   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:20:19.125899   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:20:19.125920   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:20:19.125961   32705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.functional-545557 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-545557]
	I0817 21:20:20.395343   32705 provision.go:172] copyRemoteCerts
	I0817 21:20:20.395397   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:20:20.395433   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.413864   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.509267   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:20:20.537755   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 21:20:20.566049   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:20:20.593660   32705 provision.go:86] duration metric: configureAuth took 1.489920942s
	I0817 21:20:20.593675   32705 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:20:20.593858   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:20.593863   32705 machine.go:91] provisioned docker machine in 1.807515988s
	I0817 21:20:20.593869   32705 start.go:300] post-start starting for "functional-545557" (driver="docker")
	I0817 21:20:20.593877   32705 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:20:20.593921   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:20:20.593959   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.611775   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.704980   32705 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:20:20.708983   32705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:20:20.709007   32705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:20:20.709016   32705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:20:20.709021   32705 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:20:20.709029   32705 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:20:20.709082   32705 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:20:20.709156   32705 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:20:20.709241   32705 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/test/nested/copy/7745/hosts -> hosts in /etc/test/nested/copy/7745
	I0817 21:20:20.709280   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7745
	I0817 21:20:20.719408   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:20:20.748127   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/test/nested/copy/7745/hosts --> /etc/test/nested/copy/7745/hosts (40 bytes)
	I0817 21:20:20.776565   32705 start.go:303] post-start completed in 182.681899ms
	I0817 21:20:20.776633   32705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:20:20.776670   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.795644   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.884792   32705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:20:20.890529   32705 fix.go:56] fixHost completed within 2.126658749s
	I0817 21:20:20.890542   32705 start.go:83] releasing machines lock for "functional-545557", held for 2.126696311s
	I0817 21:20:20.890607   32705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-545557
	I0817 21:20:20.907960   32705 ssh_runner.go:195] Run: cat /version.json
	I0817 21:20:20.908002   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.908223   32705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:20:20.908257   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.941480   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.952252   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:21.039296   32705 ssh_runner.go:195] Run: systemctl --version
	I0817 21:20:21.178564   32705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:20:21.184499   32705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:20:21.206761   32705 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:20:21.206827   32705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:20:21.218211   32705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 21:20:21.218225   32705 start.go:466] detecting cgroup driver to use...
	I0817 21:20:21.218256   32705 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:20:21.218306   32705 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:20:21.233098   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:20:21.247563   32705 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:20:21.247619   32705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:20:21.263491   32705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:20:21.277249   32705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:20:21.402193   32705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:20:21.524820   32705 docker.go:212] disabling docker service ...
	I0817 21:20:21.524875   32705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:20:21.540617   32705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:20:21.554596   32705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:20:21.675588   32705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:20:21.790508   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:20:21.805096   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:20:21.826081   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0817 21:20:21.838592   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:20:21.851018   32705 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:20:21.851073   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:20:21.863903   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:20:21.876311   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:20:21.888814   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:20:21.901691   32705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:20:21.915079   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0817 21:20:21.928067   32705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:20:21.938349   32705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:20:21.948906   32705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:20:22.069483   32705 ssh_runner.go:195] Run: sudo systemctl restart containerd
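Note: the run of sed commands above rewrites /etc/containerd/config.toml in place (pause image, restrict_oom_score_adj, SystemdCgroup=false to match the cgroupfs driver, the runc v2 shim, and the CNI conf_dir) before containerd is restarted. A minimal Go sketch of the same SystemdCgroup rewrite, shown on an in-memory string rather than the real file; the input fragment is illustrative:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

    	// Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }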
	I0817 21:20:22.176773   32705 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:20:22.176838   32705 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:20:22.181895   32705 start.go:534] Will wait 60s for crictl version
	I0817 21:20:22.181953   32705 ssh_runner.go:195] Run: which crictl
	I0817 21:20:22.186576   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:20:22.235604   32705 retry.go:31] will retry after 10.159626752s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:20:22Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unknown desc = server is not initialized yet"
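Note: retry.go treats the "server is not initialized yet" failure as transient and schedules another attempt (here after roughly 10s), after which the second crictl version call below succeeds. A minimal sketch of that retry-until-timeout shape, assuming a simple capped doubling backoff rather than minikube's exact policy:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry runs fn until it succeeds or the deadline passes, doubling the
    // wait between attempts up to a cap. The policy here is illustrative only.
    func retry(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	wait := time.Second
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().Add(wait).After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    		if wait < 16*time.Second {
    			wait *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	_ = retry(60*time.Second, func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("server is not initialized yet")
    		}
    		return nil
    	})
    	fmt.Println("succeeded after", attempts, "attempts")
    }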
	I0817 21:20:32.397411   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:20:32.440762   32705 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0817 21:20:32.440819   32705 ssh_runner.go:195] Run: containerd --version
	I0817 21:20:32.470084   32705 ssh_runner.go:195] Run: containerd --version
	I0817 21:20:32.511198   32705 out.go:177] * Preparing Kubernetes v1.27.4 on containerd 1.6.21 ...
	I0817 21:20:32.516945   32705 cli_runner.go:164] Run: docker network inspect functional-545557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:20:32.534170   32705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:20:32.541262   32705 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0817 21:20:32.543193   32705 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:20:32.543274   32705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:32.584068   32705 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:20:32.584078   32705 containerd.go:518] Images already preloaded, skipping extraction
	I0817 21:20:32.584133   32705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:32.624207   32705 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:20:32.624217   32705 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:20:32.624281   32705 ssh_runner.go:195] Run: sudo crictl info
	I0817 21:20:32.668222   32705 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0817 21:20:32.668245   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:32.668252   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:32.668261   32705 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:20:32.668278   32705 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-545557 NodeName:functional-545557 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:20:32.668402   32705 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-545557"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
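Note: the generated kubeadm config above is a single multi-document YAML stream carrying four objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which is later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch that decodes such a stream and lists each document's kind, using gopkg.in/yaml.v3; the local file name "kubeadm.yaml" is an assumption:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Expected output for this run: kubeadm.k8s.io/v1beta3 InitConfiguration,
    		// kubeadm.k8s.io/v1beta3 ClusterConfiguration, and the kubelet/kube-proxy configs.
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }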
	I0817 21:20:32.668466   32705 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-545557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0817 21:20:32.668528   32705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:20:32.679425   32705 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:20:32.679487   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:20:32.690090   32705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0817 21:20:32.712015   32705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:20:32.733261   32705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
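
Three files go over ssh here: the systemd drop-in (10-kubeadm.conf), the kubelet unit, and the regenerated kubeadm config. A quick on-node check that systemd sees the drop-in (sketch):

    systemctl cat kubelet          # should show 10-kubeadm.conf with the ExecStart above
    sudo systemctl daemon-reload   # pick up the freshly copied units
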
	I0817 21:20:32.754895   32705 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:20:32.759442   32705 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557 for IP: 192.168.49.2
	I0817 21:20:32.759462   32705 certs.go:190] acquiring lock for shared ca certs: {Name:mk058988a603cd06c6d056488c4bdaf60bd886a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:32.759587   32705 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key
	I0817 21:20:32.759637   32705 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key
	I0817 21:20:32.759706   32705 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.key
	I0817 21:20:32.759754   32705 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.key.dd3b5fb2
	I0817 21:20:32.759793   32705 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.key
	I0817 21:20:32.759919   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem (1338 bytes)
	W0817 21:20:32.759946   32705 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745_empty.pem, impossibly tiny 0 bytes
	I0817 21:20:32.759953   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem (1675 bytes)
	I0817 21:20:32.759976   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:20:32.759999   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:20:32.760020   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem (1675 bytes)
	I0817 21:20:32.760061   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:20:32.760709   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:20:32.792401   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:20:32.822206   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:20:32.852200   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:20:32.881600   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:20:32.912298   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:20:32.948085   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:20:32.976604   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:20:33.005865   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:20:33.039294   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem --> /usr/share/ca-certificates/7745.pem (1338 bytes)
	I0817 21:20:33.070087   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /usr/share/ca-certificates/77452.pem (1708 bytes)
	I0817 21:20:33.100415   32705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:20:33.123730   32705 ssh_runner.go:195] Run: openssl version
	I0817 21:20:33.130979   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:20:33.143031   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.147546   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.147602   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.156425   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:20:33.167573   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7745.pem && ln -fs /usr/share/ca-certificates/7745.pem /etc/ssl/certs/7745.pem"
	I0817 21:20:33.179489   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.183984   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:18 /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.184046   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.192703   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7745.pem /etc/ssl/certs/51391683.0"
	I0817 21:20:33.203894   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77452.pem && ln -fs /usr/share/ca-certificates/77452.pem /etc/ssl/certs/77452.pem"
	I0817 21:20:33.217449   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.223740   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:18 /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.223798   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.232590   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77452.pem /etc/ssl/certs/3ec20f2e.0"
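
Each CA above also gets a symlink named after its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors in /etc/ssl/certs. The same step by hand (sketch, using the minikubeCA path from this run):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here
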
	I0817 21:20:33.243980   32705 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:20:33.248341   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 21:20:33.256551   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 21:20:33.265049   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 21:20:33.273641   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 21:20:33.282239   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 21:20:33.290604   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
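
openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24h), so a clean run through these six checks means no regeneration is needed. An equivalent loop over the same files (sketch):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "${c}.crt expires within 24h"
    done
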
	I0817 21:20:33.298952   32705 kubeadm.go:404] StartCluster: {Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:33.299042   32705 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 21:20:33.299097   32705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:20:33.339893   32705 cri.go:89] found id: "7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467"
	I0817 21:20:33.339904   32705 cri.go:89] found id: "b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e"
	I0817 21:20:33.339908   32705 cri.go:89] found id: "63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	I0817 21:20:33.339912   32705 cri.go:89] found id: "dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f"
	I0817 21:20:33.339915   32705 cri.go:89] found id: "2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70"
	I0817 21:20:33.339919   32705 cri.go:89] found id: "32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	I0817 21:20:33.339922   32705 cri.go:89] found id: "e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a"
	I0817 21:20:33.339925   32705 cri.go:89] found id: "bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745"
	I0817 21:20:33.339928   32705 cri.go:89] found id: "1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98"
	I0817 21:20:33.339934   32705 cri.go:89] found id: ""
	I0817 21:20:33.339992   32705 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 21:20:33.373084   32705 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98","pid":1260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98/rootfs","created":"2023-08-17T21:19:10.763298615Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.27.4","io.kubernetes.cri.sandbox-id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b136726eb5b054d3573cf1ee701d51d8"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70","pid":1791,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70/rootfs","created":"2023-08-17T21:19:32.677621896Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.27.4","io.kubernetes.cri.sandbox-id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","io.kubernetes.cri.sandbox-name":"kube-proxy-rprkp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c44cf9c6-7dd1-4380-a813-d1f4a1b9a298"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","pid":1121,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06/rootfs","created":"2023-08-17T21:19:10.53042668Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-545557_3a287b5aac4530715622a5e33a1287a5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a287b5aac4530715622a5e33a1287a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32defc4cba5123b96a7769d6baac655e3eef771
543100a5600f2c0cf46286279","pid":1329,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279/rootfs","created":"2023-08-17T21:19:10.899584799Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.27.4","io.kubernetes.cri.sandbox-id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"31484f02d94a16c70ea16af89113e3b3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","pid":1709,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934/rootfs","created":"2023-08-17T21:19:32.529879947Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9px5c_7771d523-4a98-4e62-9edb-74b5468f95bf","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-9px5c","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7771d523-4a98-4e62-9edb-74b5468f95bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","pid":11
97,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637/rootfs","created":"2023-08-17T21:19:10.642442048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-545557_31484f02d94a16c70ea16af89113e3b3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"31484f02d94a16c70ea16af89113e3b3"},"owner":"root"},{"ociVersion":"1.0.2-d
ev","id":"7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467","pid":2463,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467/rootfs","created":"2023-08-17T21:20:03.21292477Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c2ba22e9-5bd3-4857-869e-b25ea0e1a08d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","pid":1183,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819/rootfs","created":"2023-08-17T21:19:10.616796428Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-545557_a00e55d2d779747966fc8928426ad862","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a00e55d2d779747966fc8928426ad862"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aadbab3
33460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5/rootfs","created":"2023-08-17T21:19:32.890552916Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c2ba22e9-5bd3-4857-869e-b25ea0e1a08d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c2ba22e9-5bd3-4857-869e-b25ea0
e1a08d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e","pid":2141,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e/rootfs","created":"2023-08-17T21:19:46.160800001Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","io.kubernetes.cri.sandbox-name":"coredns-5d78c9869d-gvljn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0261b057-521f-49ff-8046-9c5967ae60f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b0
9f2c0af2745","pid":1249,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745/rootfs","created":"2023-08-17T21:19:10.7314897Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri.sandbox-id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","io.kubernetes.cri.sandbox-name":"etcd-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a287b5aac4530715622a5e33a1287a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","pid":2111,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf61adea89cb901ccb37d90a84
2d8e03a0716bce299a59f17ed2b9bcf305969d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d/rootfs","created":"2023-08-17T21:19:46.058845639Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5d78c9869d-gvljn_0261b057-521f-49ff-8046-9c5967ae60f6","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5d78c9869d-gvljn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0261b057-521f-49ff-8046-9c5967ae60f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","pid":1148,"status":"running","bund
le":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823/rootfs","created":"2023-08-17T21:19:10.559179704Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-545557_b136726eb5b054d3573cf1ee701d51d8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b136726eb5b054d3573cf1ee701d51d8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":
"dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f","pid":1806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f/rootfs","created":"2023-08-17T21:19:32.801212787Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri.sandbox-id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","io.kubernetes.cri.sandbox-name":"kindnet-9px5c","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7771d523-4a98-4e62-9edb-74b5468f95bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a","pid":1319,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a/rootfs","created":"2023-08-17T21:19:10.936921942Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.27.4","io.kubernetes.cri.sandbox-id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a00e55d2d779747966fc8928426ad862"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","pid":1716,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834
b3930e2a3a830","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830/rootfs","created":"2023-08-17T21:19:32.541884769Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rprkp_c44cf9c6-7dd1-4380-a813-d1f4a1b9a298","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rprkp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c44cf9c6-7dd1-4380-a813-d1f4a1b9a298"},"owner":"root"}]
	I0817 21:20:33.373352   32705 cri.go:126] list returned 16 containers
	I0817 21:20:33.373360   32705 cri.go:129] container: {ID:1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98 Status:running}
	I0817 21:20:33.373374   32705 cri.go:135] skipping {1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98 running}: state = "running", want "paused"
	I0817 21:20:33.373382   32705 cri.go:129] container: {ID:2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 Status:running}
	I0817 21:20:33.373388   32705 cri.go:135] skipping {2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 running}: state = "running", want "paused"
	I0817 21:20:33.373393   32705 cri.go:129] container: {ID:2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06 Status:running}
	I0817 21:20:33.373399   32705 cri.go:131] skipping 2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06 - not in ps
	I0817 21:20:33.373403   32705 cri.go:129] container: {ID:32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 Status:running}
	I0817 21:20:33.373409   32705 cri.go:135] skipping {32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 running}: state = "running", want "paused"
	I0817 21:20:33.373414   32705 cri.go:129] container: {ID:3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934 Status:running}
	I0817 21:20:33.373420   32705 cri.go:131] skipping 3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934 - not in ps
	I0817 21:20:33.373424   32705 cri.go:129] container: {ID:6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 Status:running}
	I0817 21:20:33.373430   32705 cri.go:131] skipping 6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 - not in ps
	I0817 21:20:33.373434   32705 cri.go:129] container: {ID:7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 Status:running}
	I0817 21:20:33.373439   32705 cri.go:135] skipping {7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 running}: state = "running", want "paused"
	I0817 21:20:33.373444   32705 cri.go:129] container: {ID:a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819 Status:running}
	I0817 21:20:33.373450   32705 cri.go:131] skipping a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819 - not in ps
	I0817 21:20:33.373454   32705 cri.go:129] container: {ID:aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5 Status:running}
	I0817 21:20:33.373459   32705 cri.go:131] skipping aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5 - not in ps
	I0817 21:20:33.373463   32705 cri.go:129] container: {ID:b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e Status:running}
	I0817 21:20:33.373471   32705 cri.go:135] skipping {b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e running}: state = "running", want "paused"
	I0817 21:20:33.373477   32705 cri.go:129] container: {ID:bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 Status:running}
	I0817 21:20:33.373482   32705 cri.go:135] skipping {bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 running}: state = "running", want "paused"
	I0817 21:20:33.373487   32705 cri.go:129] container: {ID:cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d Status:running}
	I0817 21:20:33.373492   32705 cri.go:131] skipping cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d - not in ps
	I0817 21:20:33.373496   32705 cri.go:129] container: {ID:da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823 Status:running}
	I0817 21:20:33.373502   32705 cri.go:131] skipping da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823 - not in ps
	I0817 21:20:33.373506   32705 cri.go:129] container: {ID:dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f Status:running}
	I0817 21:20:33.373512   32705 cri.go:135] skipping {dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f running}: state = "running", want "paused"
	I0817 21:20:33.373517   32705 cri.go:129] container: {ID:e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a Status:running}
	I0817 21:20:33.373522   32705 cri.go:135] skipping {e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a running}: state = "running", want "paused"
	I0817 21:20:33.373527   32705 cri.go:129] container: {ID:f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830 Status:running}
	I0817 21:20:33.373533   32705 cri.go:131] skipping f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830 - not in ps
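
All sixteen runc tasks report status "running", so nothing matches the {State:paused} filter, and sandbox IDs that never appeared in the crictl ps output are skipped outright. The same listing by hand (sketch, assuming jq is installed on the node):

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id[0:12]) \(.status)"'
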
	I0817 21:20:33.373581   32705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:20:33.384218   32705 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 21:20:33.384226   32705 kubeadm.go:636] restartCluster start
	I0817 21:20:33.384279   32705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 21:20:33.394323   32705 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:33.394860   32705 kubeconfig.go:92] found "functional-545557" server: "https://192.168.49.2:8441"
	I0817 21:20:33.396506   32705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 21:20:33.406699   32705 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-08-17 21:18:59.869413085 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-08-17 21:20:32.747872085 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
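
diff -u exits 1 as soon as the files differ; here only the apiserver enable-admission-plugins value changed (this test run swaps in NamespaceAutoProvision), which is enough to trigger the restart path. The same check by hand (sketch):

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "configs match"
    else
      echo "needs reconfigure"   # diff exits 1 on any difference
    fi
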
	I0817 21:20:33.406709   32705 kubeadm.go:1128] stopping kube-system containers ...
	I0817 21:20:33.406719   32705 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 21:20:33.406780   32705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:20:33.457395   32705 cri.go:89] found id: "7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467"
	I0817 21:20:33.457407   32705 cri.go:89] found id: "b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e"
	I0817 21:20:33.457412   32705 cri.go:89] found id: "63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	I0817 21:20:33.457425   32705 cri.go:89] found id: "dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f"
	I0817 21:20:33.457428   32705 cri.go:89] found id: "2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70"
	I0817 21:20:33.457432   32705 cri.go:89] found id: "32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	I0817 21:20:33.457435   32705 cri.go:89] found id: "e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a"
	I0817 21:20:33.457439   32705 cri.go:89] found id: "bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745"
	I0817 21:20:33.457442   32705 cri.go:89] found id: "1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98"
	I0817 21:20:33.457447   32705 cri.go:89] found id: ""
	I0817 21:20:33.457451   32705 cri.go:234] Stopping containers: [7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98]
	I0817 21:20:33.457504   32705 ssh_runner.go:195] Run: which crictl
	I0817 21:20:33.461961   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98
	I0817 21:20:38.657427   32705 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98: (5.195431437s)
	W0817 21:20:38.657479   32705 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98: Process exited with status 1
	stdout:
	7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467
	b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e
	
	stderr:
	E0817 21:20:38.654541    3437 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": not found" containerID="63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	time="2023-08-17T21:20:38Z" level=fatal msg="stopping the container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": not found"
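
The batch stop aborts with NotFound because one listed container (63793e51912d) was already gone by the time the stop ran, so crictl stopped only the first two IDs; minikube logs this as a warning and proceeds. A per-ID loop tolerates already-removed containers (sketch; crictl resolves unique ID prefixes):

    for id in 7e3284ab1f69 b9329bd756ee 63793e51912d; do
      sudo crictl stop --timeout=10 "$id" || true   # ignore NotFound for removed containers
    done
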
	I0817 21:20:38.657536   32705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 21:20:38.735084   32705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:20:38.746118   32705 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 21:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 21:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 17 21:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 21:19 /etc/kubernetes/scheduler.conf
	
	I0817 21:20:38.746177   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0817 21:20:38.757569   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0817 21:20:38.768324   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0817 21:20:38.779182   32705 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:38.779237   32705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 21:20:38.789627   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0817 21:20:38.800666   32705 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:38.800718   32705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 21:20:38.810757   32705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:20:38.821186   32705 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 21:20:38.821199   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:38.889194   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.458509   32705 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.569292433s)
	I0817 21:20:42.458526   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.667326   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.745649   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
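
Rather than a full kubeadm init, the restart path replays five init phases against the updated config. The equivalent on-node sequence (sketch, matching the commands above):

    K=/var/lib/minikube/binaries/v1.27.4
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into subcommand + argument
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
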
	I0817 21:20:42.826151   32705 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:20:42.826213   32705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:20:42.851211   32705 api_server.go:72] duration metric: took 25.060509ms to wait for apiserver process to appear ...
	I0817 21:20:42.851224   32705 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:20:42.851239   32705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 21:20:42.864171   32705 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
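
The healthz probe can be reproduced directly; the endpoint serves TLS with the minikube CA, so either pass that CA or use -k to skip verification (sketch):

    curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.49.2:8441/healthz   # expect: ok
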
	I0817 21:20:42.883600   32705 api_server.go:141] control plane version: v1.27.4
	I0817 21:20:42.883616   32705 api_server.go:131] duration metric: took 32.386878ms to wait for apiserver health ...
	I0817 21:20:42.883624   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:42.883629   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:42.886664   32705 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:20:42.892117   32705 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:20:42.899615   32705 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:20:42.899626   32705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:20:42.931957   32705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
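
The CNI manifest is applied with the bundled kubectl against the node-local kubeconfig. Checking that the kindnet pods come up (sketch; the app=kindnet label is an assumption about the bundled manifest):

    sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -l app=kindnet
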
	I0817 21:20:43.416360   32705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:20:43.427481   32705 system_pods.go:59] 8 kube-system pods found
	I0817 21:20:43.427498   32705 system_pods.go:61] "coredns-5d78c9869d-gvljn" [0261b057-521f-49ff-8046-9c5967ae60f6] Running
	I0817 21:20:43.427502   32705 system_pods.go:61] "etcd-functional-545557" [c8074a9c-97a0-47a6-8cb8-05f491da1868] Running
	I0817 21:20:43.427506   32705 system_pods.go:61] "kindnet-9px5c" [7771d523-4a98-4e62-9edb-74b5468f95bf] Running
	I0817 21:20:43.427510   32705 system_pods.go:61] "kube-apiserver-functional-545557" [ca79484f-521a-4f20-b3cf-d0b7f7387643] Running
	I0817 21:20:43.427514   32705 system_pods.go:61] "kube-controller-manager-functional-545557" [89597ce9-2cdd-43da-bcad-5ee36ccbc01d] Running
	I0817 21:20:43.427518   32705 system_pods.go:61] "kube-proxy-rprkp" [c44cf9c6-7dd1-4380-a813-d1f4a1b9a298] Running
	I0817 21:20:43.427522   32705 system_pods.go:61] "kube-scheduler-functional-545557" [84382816-6f32-43d8-aeb4-af38fb8fe4d3] Running
	I0817 21:20:43.427530   32705 system_pods.go:61] "storage-provisioner" [c2ba22e9-5bd3-4857-869e-b25ea0e1a08d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 21:20:43.427540   32705 system_pods.go:74] duration metric: took 11.166872ms to wait for pod list to return data ...
	I0817 21:20:43.427547   32705 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:20:43.431017   32705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0817 21:20:43.431034   32705 node_conditions.go:123] node cpu capacity is 2
	I0817 21:20:43.431044   32705 node_conditions.go:105] duration metric: took 3.493316ms to run NodePressure ...
	I0817 21:20:43.431059   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:43.665486   32705 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 21:20:43.686759   32705 retry.go:31] will retry after 282.637409ms: kubelet not initialised
	I0817 21:20:43.980769   32705 kubeadm.go:787] kubelet initialised
	I0817 21:20:43.980781   32705 kubeadm.go:788] duration metric: took 315.282451ms waiting for restarted kubelet to initialise ...
	I0817 21:20:43.980797   32705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:20:43.993427   32705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.060657   32705 pod_ready.go:97] error getting pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-gvljn": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060673   32705 pod_ready.go:81] duration metric: took 1.067231893s waiting for pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.060684   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-gvljn": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060708   32705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.060893   32705 pod_ready.go:97] error getting pod "etcd-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060900   32705 pod_ready.go:81] duration metric: took 186.188µs waiting for pod "etcd-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.060906   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060918   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061064   32705 pod_ready.go:97] error getting pod "kube-apiserver-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061069   32705 pod_ready.go:81] duration metric: took 146.606µs waiting for pod "kube-apiserver-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061075   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061087   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061228   32705 pod_ready.go:97] error getting pod "kube-controller-manager-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061233   32705 pod_ready.go:81] duration metric: took 141.117µs waiting for pod "kube-controller-manager-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061240   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061249   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rprkp" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061448   32705 pod_ready.go:97] error getting pod "kube-proxy-rprkp" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rprkp": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061454   32705 pod_ready.go:81] duration metric: took 200.604µs waiting for pod "kube-proxy-rprkp" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061460   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rprkp" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rprkp": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061470   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061631   32705 pod_ready.go:97] error getting pod "kube-scheduler-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061639   32705 pod_ready.go:81] duration metric: took 162.598µs waiting for pod "kube-scheduler-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061644   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061654   32705 pod_ready.go:38] duration metric: took 1.080848003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:20:45.061670   32705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0817 21:20:45.075251   32705 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
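
cat /proc//oom_adj fails because $(pgrep kube-apiserver) expanded to an empty string: the apiserver process had just been cycled by the control-plane phase and was not up at that instant. A guarded version of the check (sketch):

    if pid=$(pgrep -xn kube-apiserver); then
      cat "/proc/${pid}/oom_adj"
    else
      echo "kube-apiserver not running yet"
    fi
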
	I0817 21:20:45.075268   32705 kubeadm.go:640] restartCluster took 11.691035861s
	I0817 21:20:45.075275   32705 kubeadm.go:406] StartCluster complete in 11.776332196s
	I0817 21:20:45.075291   32705 settings.go:142] acquiring lock: {Name:mk7a5a07825601654f691495799b769adb4489ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:45.075362   32705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:20:45.076118   32705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/kubeconfig: {Name:mkf341824bbe915f226637e75b19e0928287e2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:45.077543   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:20:45.077842   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:45.077878   32705 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:20:45.077943   32705 addons.go:69] Setting storage-provisioner=true in profile "functional-545557"
	I0817 21:20:45.077957   32705 addons.go:231] Setting addon storage-provisioner=true in "functional-545557"
	W0817 21:20:45.077963   32705 addons.go:240] addon storage-provisioner should already be in state true
	I0817 21:20:45.078040   32705 host.go:66] Checking if "functional-545557" exists ...
	I0817 21:20:45.078514   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	W0817 21:20:45.079343   32705 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-545557" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.079369   32705 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.079461   32705 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:20:45.084341   32705 out.go:177] * Verifying Kubernetes components...
	I0817 21:20:45.080496   32705 addons.go:69] Setting default-storageclass=true in profile "functional-545557"
	I0817 21:20:45.084421   32705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-545557"
	I0817 21:20:45.084772   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	I0817 21:20:45.088321   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:20:45.129609   32705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:45.131919   32705 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:20:45.131932   32705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:20:45.132000   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	W0817 21:20:45.138974   32705 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	I0817 21:20:45.170964   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	E0817 21:20:45.319035   32705 start.go:866] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0817 21:20:45.319060   32705 start.go:291] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0817 21:20:45.319074   32705 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0817 21:20:45.319243   32705 node_ready.go:35] waiting up to 6m0s for node "functional-545557" to be "Ready" ...
	I0817 21:20:45.319634   32705 node_ready.go:53] error getting node "functional-545557": Get "https://192.168.49.2:8441/api/v1/nodes/functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.319643   32705 node_ready.go:38] duration metric: took 390.336µs waiting for node "functional-545557" to be "Ready" ...
	I0817 21:20:45.321374   32705 out.go:177] 
	W0817 21:20:45.323835   32705 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-545557": Get "https://192.168.49.2:8441/api/v1/nodes/functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	W0817 21:20:45.323988   32705 out.go:239] * 
	W0817 21:20:45.325227   32705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:20:45.328776   32705 out.go:177] 
	
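	The failure chain above is self-consistent: the oom_adj check failed because "pgrep kube-apiserver" matched no process, so the shell expanded an empty PID into the path /proc//oom_adj, and every later call to 192.168.49.2:8441 was refused, ending in the GUEST_START exit. A guarded version of that probe could look like the following Go sketch; it is illustrative only (readOOMAdj is an invented helper, not minikube's actual code):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// readOOMAdj resolves the PID first and fails early when pgrep matches
	// nothing, instead of letting the shell read /proc//oom_adj.
	func readOOMAdj(name string) (string, error) {
		out, err := exec.Command("pgrep", name).Output()
		if err != nil {
			// pgrep exits non-zero when no process matches.
			return "", fmt.Errorf("no running process matches %q: %w", name, err)
		}
		// pgrep prints one PID per line; take the first match.
		pid := strings.Fields(string(out))[0]
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(data)), nil
	}
	
	func main() {
		adj, err := readOOMAdj("kube-apiserver")
		fmt.Println(adj, err)
	}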
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0aa7c042d6413       64aece92d6bde       4 seconds ago        Running             kube-apiserver            1                   d32a70d275515       kube-apiserver-functional-545557
	7e0f17a33e5fa       ba04bb24b9575       5 seconds ago        Running             storage-provisioner       2                   aadbab3334601       storage-provisioner
	ad092e1964ae8       97e04611ad434       5 seconds ago        Running             coredns                   1                   cf61adea89cb9       coredns-5d78c9869d-gvljn
	bfcb519b09ff2       64aece92d6bde       6 seconds ago        Exited              kube-apiserver            0                   d32a70d275515       kube-apiserver-functional-545557
	7e3284ab1f69a       ba04bb24b9575       46 seconds ago       Exited              storage-provisioner       1                   aadbab3334601       storage-provisioner
	b9329bd756ee5       97e04611ad434       About a minute ago   Exited              coredns                   0                   cf61adea89cb9       coredns-5d78c9869d-gvljn
	dff01c933c29d       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   3e2b34e654e56       kindnet-9px5c
	2305529b95840       532e5a30e948f       About a minute ago   Running             kube-proxy                0                   f5307ce8a0a79       kube-proxy-rprkp
	e78322d1b35b2       6eb63895cb67f       About a minute ago   Running             kube-scheduler            0                   a4bf8cbafb685       kube-scheduler-functional-545557
	bfa7bd71ecb82       24bc64e911039       About a minute ago   Running             etcd                      0                   2fff0537dc1c6       etcd-functional-545557
	1a897bb2deb19       389f6f052cf83       About a minute ago   Running             kube-controller-manager   0                   da104c5e9d918       kube-controller-manager-functional-545557
	
	* 
	* ==> containerd <==
	* Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.128154314Z" level=info msg="StartContainer for \"7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b\""
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.210580316Z" level=info msg="StartContainer for \"ad092e1964ae8dd8b14760be547953684a22aed7bcc004802c128444744a4fd6\" returns successfully"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.231240495Z" level=info msg="StartContainer for \"7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b\" returns successfully"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.928244604Z" level=info msg="StopContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" with timeout 1 (s)"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.928912581Z" level=info msg="Stop container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" with signal terminated"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.976507478Z" level=info msg="CreateContainer within sandbox \"d32a70d275515c00a2edf7e6a29850ef3e800f6ba0123019bc20ee0d0a247cec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.011631172Z" level=info msg="shim disconnected" id=6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.011963417Z" level=warning msg="cleaning up after shim disconnected" id=6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 namespace=k8s.io
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.012068728Z" level=info msg="cleaning up dead shim"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.038112622Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:20:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3825 runtime=io.containerd.runc.v2\n"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.040379956Z" level=info msg="CreateContainer within sandbox \"d32a70d275515c00a2edf7e6a29850ef3e800f6ba0123019bc20ee0d0a247cec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.041149535Z" level=info msg="StartContainer for \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.071900871Z" level=info msg="shim disconnected" id=32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.072189154Z" level=warning msg="cleaning up after shim disconnected" id=32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 namespace=k8s.io
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.072280557Z" level=info msg="cleaning up dead shim"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.145534682Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:20:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\n"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.172455837Z" level=info msg="StopContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.173609812Z" level=info msg="StopPodSandbox for \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174017460Z" level=info msg="Container to stop \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174285780Z" level=info msg="TearDown network for sandbox \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\" successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174388186Z" level=info msg="StopPodSandbox for \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.238912305Z" level=info msg="StartContainer for \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.982733823Z" level=info msg="RemoveContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.988728859Z" level=info msg="RemoveContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.989659822Z" level=error msg="ContainerStatus for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found"
	
	* 
	* ==> coredns [ad092e1964ae8dd8b14760be547953684a22aed7bcc004802c128444744a4fd6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41357 - 29451 "HINFO IN 7262825344432124189.578214572530978332. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.06043919s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	
	* 
	* ==> coredns [b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55108 - 22279 "HINFO IN 1190291433924974191.7624145215812639463. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.075802328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-545557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-545557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=functional-545557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_19_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-545557
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:20:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-545557
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 32a6d466b0d2406d9b1adac220bb80a0
	  System UUID:                a6e31771-80c9-484e-8995-d9d277994702
	  Boot ID:                    da56fcbe-e8d4-44e4-8927-1925d04822e5
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-gvljn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-functional-545557                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kindnet-9px5c                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-functional-545557             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-controller-manager-functional-545557    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-rprkp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-functional-545557             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x7 over 100s)  kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 93s                  kubelet          Starting kubelet.
	  Normal  NodeNotReady             92s                  kubelet          Node functional-545557 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    92s                  kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                  kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                92s                  kubelet          Node functional-545557 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  92s                  kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           80s                  node-controller  Node functional-545557 event: Registered Node functional-545557 in Controller
	  Normal  Starting                 8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                   kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                   kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                   kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8s                   kubelet          Node functional-545557 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                   kubelet          Node functional-545557 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015730] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.269498] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.619452] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745] <==
	* {"level":"info","ts":"2023-08-17T21:19:10.854Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.855Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.855Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:19:10.857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.143Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-545557 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:19:11.144Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:19:11.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.152Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.152Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.153Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T21:19:11.171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:19:11.199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:20:50 up  1:03,  0 users,  load average: 0.75, 0.96, 0.56
	Linux functional-545557 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f] <==
	* I0817 21:19:32.888608       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0817 21:19:32.888676       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0817 21:19:32.888793       1 main.go:116] setting mtu 1500 for CNI 
	I0817 21:19:32.888841       1 main.go:146] kindnetd IP family: "ipv4"
	I0817 21:19:32.888877       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0817 21:19:33.384810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:33.384844       1 main.go:227] handling current node
	I0817 21:19:43.401699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:43.401732       1 main.go:227] handling current node
	I0817 21:19:53.412784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:53.412817       1 main.go:227] handling current node
	I0817 21:20:03.417246       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:03.417275       1 main.go:227] handling current node
	I0817 21:20:13.428244       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:13.428274       1 main.go:227] handling current node
	I0817 21:20:23.438196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:23.438221       1 main.go:227] handling current node
	I0817 21:20:33.450125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:33.450157       1 main.go:227] handling current node
	I0817 21:20:43.462797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:43.462822       1 main.go:227] handling current node
	
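	The kindnet log above re-handles the node at roughly ten-second intervals, the cadence a plain time.Ticker loop produces. An illustrative Go sketch of that loop (not kindnet's actual implementation; bounded to three iterations for brevity):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Fire every ten seconds, matching the spacing of the
		// "handling current node" lines in the log above.
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for i := 0; i < 3; i++ {
			<-ticker.C
			fmt.Println("handling current node")
		}
	}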
	* 
	* ==> kube-apiserver [0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90] <==
	* I0817 21:20:47.825969       1 naming_controller.go:291] Starting NamingConditionController
	I0817 21:20:47.825978       1 establishing_controller.go:76] Starting EstablishingController
	I0817 21:20:47.825986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0817 21:20:47.825993       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0817 21:20:47.826001       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0817 21:20:47.839895       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0817 21:20:48.078273       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0817 21:20:48.529456       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:20:48.605279       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0817 21:20:48.606062       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 21:20:48.617991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:20:48.618353       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:20:48.667074       1 shared_informer.go:318] Caches are synced for configmaps
	I0817 21:20:48.669335       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0817 21:20:48.669363       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0817 21:20:48.669342       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0817 21:20:48.678710       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0817 21:20:48.679073       1 aggregator.go:152] initial CRD sync complete...
	I0817 21:20:48.679129       1 autoregister_controller.go:141] Starting autoregister controller
	I0817 21:20:48.679169       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0817 21:20:48.679227       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:20:48.706335       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0817 21:20:48.827678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 21:20:49.395293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0817 21:20:49.397722       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [bfcb519b09ff2090f2611d5a2f2cd5c0400539fd38faa3c63178b9c1150674e3] <==
	* I0817 21:20:43.949289       1 server.go:553] external host was not specified, using 192.168.49.2
	I0817 21:20:43.950453       1 server.go:166] Version: v1.27.4
	I0817 21:20:43.950570       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0817 21:20:43.951061       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
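	This exited apiserver pins down why every dial to 192.168.49.2:8441 in this report was refused: the replacement process tried to bind port 8441 while the previous instance still held the socket. The bind error reproduces with a few lines of plain Go (an illustrative sketch, not minikube code):
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// The first listener takes the port, as the old kube-apiserver did.
		first, err := net.Listen("tcp", "0.0.0.0:8441")
		if err != nil {
			fmt.Println("setup failed:", err)
			return
		}
		defer first.Close()
	
		// The second bind fails exactly like the log line above:
		// "listen tcp 0.0.0.0:8441: bind: address already in use"
		if _, err := net.Listen("tcp", "0.0.0.0:8441"); err != nil {
			fmt.Println(err)
		}
	}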
	* 
	* ==> kube-controller-manager [1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98] <==
	* E0817 21:20:48.415279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:59550->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415349       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:59348->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415414       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59250->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415447       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.VolumeAttachment: unknown (get volumeattachments.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59286->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415515       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:59780->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415588       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Secret: unknown (get secrets) - error from a previous attempt: read tcp 192.168.49.2:59208->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59750->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59688->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:59528->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PriorityClass: unknown (get priorityclasses.scheduling.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59226->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447592       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:59322->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447679       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59362->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas) - error from a previous attempt: read tcp 192.168.49.2:59380->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59398->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448265       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59312->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448336       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59450->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59452->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448574       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59462->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448637       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59476->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448681       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:59484->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448833       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:59500->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates) - error from a previous attempt: read tcp 192.168.49.2:59512->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448910       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v2.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling) - error from a previous attempt: read tcp 192.168.49.2:59514->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.449090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts) - error from a previous attempt: read tcp 192.168.49.2:59520->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.452310       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59668->192.168.49.2:8441: read: connection reset by peer
	
	* 
	* ==> kube-proxy [2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70] <==
	* I0817 21:19:32.763554       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0817 21:19:32.763767       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0817 21:19:32.763848       1 server_others.go:554] "Using iptables proxy"
	I0817 21:19:32.807748       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:19:32.807791       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0817 21:19:32.807800       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0817 21:19:32.807813       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0817 21:19:32.807831       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:19:32.808524       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:19:32.808536       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:19:32.812552       1 config.go:188] "Starting service config controller"
	I0817 21:19:32.812574       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:19:32.812598       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:19:32.812603       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:19:32.821280       1 config.go:315] "Starting node config controller"
	I0817 21:19:32.821319       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:19:32.913484       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:19:32.913531       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:19:32.921945       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a] <==
	* E0817 21:19:15.020818       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 21:19:15.020523       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:19:15.021031       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:19:15.020563       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:19:15.022898       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:19:15.020466       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:19:15.023091       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 21:19:15.023427       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:19:15.023575       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0817 21:19:16.306576       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 21:20:48.390183       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59202->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.442986       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:59258->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443262       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:59268->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443443       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59294->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443602       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:59300->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443769       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:59332->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443929       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:59344->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:59350->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444248       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:59218->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:59376->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444573       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:59238->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444737       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59418->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444888       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:59396->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447779       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59404->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.453109       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59432->192.168.49.2:8441: read: connection reset by peer
	
	* 
	* ==> kubelet <==
	* Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuset/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/systemd/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/perf_event/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/unified/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/memory/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/blkio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/freezer/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: I0817 21:20:44.973725    3566 scope.go:115] "RemoveContainer" containerID="bfcb519b09ff2090f2611d5a2f2cd5c0400539fd38faa3c63178b9c1150674e3"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: E0817 21:20:44.997177    3566 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-545557.177c487918bcaef6", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-545557", UID:"31484f02d94a16c70ea16af89113e3b3", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-545557"}, FirstTimestamp:time.Date(2023, time.August, 17, 21, 20, 44, 927725302, time.Local), LastTimestamp:time.Date(2023, time.August, 17, 21, 20, 44, 927725302, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": read tcp 192.168.49.2:59180->192.168.49.2:8441: read: connection reset by peer'(may retry after sleeping)
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.980660    3566 scope.go:115] "RemoveContainer" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.989276    3566 scope.go:115] "RemoveContainer" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: E0817 21:20:45.990018    3566 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.990144    3566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279} err="failed to get container status \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found"
	Aug 17 21:20:46 functional-545557 kubelet[3566]: I0817 21:20:46.927532    3566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=31484f02d94a16c70ea16af89113e3b3 path="/var/lib/kubelet/pods/31484f02d94a16c70ea16af89113e3b3/volumes"
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.348014    3566 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59804->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.348091    3566 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59832->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.383464    3566 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59902->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:49 functional-545557 kubelet[3566]: I0817 21:20:49.147102    3566 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Aug 17 21:20:49 functional-545557 kubelet[3566]: I0817 21:20:49.534766    3566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-545557" podStartSLOduration=5.534707699 podCreationTimestamp="2023-08-17 21:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:20:44.075883843 +0000 UTC m=+1.412825071" watchObservedRunningTime="2023-08-17 21:20:49.534707699 +0000 UTC m=+6.871648968"
	
	* 
	* ==> storage-provisioner [7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b] <==
	* I0817 21:20:44.241508       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:20:44.255255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:20:44.255487       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467] <==
	* I0817 21:20:03.249705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:20:03.264454       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:20:03.264766       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:20:03.277358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:20:03.278074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19!
	I0817 21:20:03.278647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0829c91a-ee07-421e-8127-a4a6a2ff64cb", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19 became leader
	I0817 21:20:03.379080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-545557 -n functional-545557
helpers_test.go:261: (dbg) Run:  kubectl --context functional-545557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (32.94s)
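
The kubelet log above shows the kube-apiserver container being stopped and event writes failing with "connection reset by peer" while this test restarts the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision; that restart churn is what the post-mortem captures. As a minimal sketch, not part of the test suite (the context and pod name are taken from the logs above), one can check whether the extra config actually reached the restarted apiserver's command line:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// Sketch only: inspect the static apiserver pod's command for the
	// admission-plugins flag passed via --extra-config.
	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-545557",
			"-n", "kube-system", "get", "pod", "kube-apiserver-functional-545557",
			"-o", "jsonpath={.spec.containers[0].command}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(out), "enable-admission-plugins=NamespaceAutoProvision") {
			fmt.Println("extra-config applied")
		} else {
			fmt.Println("extra-config missing from apiserver command line")
		}
	}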

                                                
                                    
TestFunctional/serial/ComponentHealth (2.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-545557 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2023-08-17 21:20:43 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0x400101fae8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x4000212230} Ready:false RestartCount:1 Image:registry.k8s.io/kube-apiserver:v1.27.4 ImageID:registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d ContainerID:containerd://0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2023-08-17 21:20:43 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0x400101fd40 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-controller-manager:v1.27.4 ImageID:registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265 ContainerID:containerd://1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98}]}
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2023-08-17 21:20:43 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0x400101fef0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-scheduler:v1.27.4 ImageID:registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af ContainerID:containerd://e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a}]}
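
The assertions above (functional_test.go:821/829) treat a control-plane pod as healthy only when its phase is Running and its Ready condition is True; here all three pods are Running but not yet Ready after the apiserver restart. A self-contained sketch of the same readiness check, assuming only kubectl and the context name from this run (not the actual test code):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList mirrors just the fields the readiness check needs.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		// Same query the test runs: control-plane pods in kube-system, as JSON.
		out, err := exec.Command("kubectl", "--context", "functional-545557",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = true
				}
			}
			// Phase "Running" alone is not enough; Ready must also be True.
			fmt.Printf("%s phase=%s ready=%v\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}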
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-545557
helpers_test.go:235: (dbg) docker inspect functional-545557:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c",
	        "Created": "2023-08-17T21:18:52.25158104Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:18:52.582984646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/hosts",
	        "LogPath": "/var/lib/docker/containers/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c/c06a3ae11e991768f372447db4fb552fcac48ff2dc018a5081930d69e7a3f72c-json.log",
	        "Name": "/functional-545557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-545557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-545557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506-init/diff:/var/lib/docker/overlay2/6e6597fd944d5f98ecbe7d9c5301a949ba6526f8982591cdfcbe3d11f113be4a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e498af663c4d416169d5e6a62154f3e2381400cdb280a92b111990dd0a57e506/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-545557",
	                "Source": "/var/lib/docker/volumes/functional-545557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-545557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-545557",
	                "name.minikube.sigs.k8s.io": "functional-545557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c939595be956f9545700a7ca6f3b0ab775a2af0eb8f59fce848e9993634c2060",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c939595be956",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-545557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c06a3ae11e99",
	                        "functional-545557"
	                    ],
	                    "NetworkID": "dd61593c3ab6fad02d296091bc6a0eb0155487ace79ece538735f5614eb170a3",
	                    "EndpointID": "d38046714a1dc43438efa47f8e9a7688ebff5054687ec97329b702a516ffd5f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
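
The NetworkSettings.Ports map in the inspect output above is where the published host ports come from (for example 32787 for SSH on 22/tcp); the cli_runner lines later in this log read them with a docker inspect Go template. A minimal standalone sketch of the same lookup (container name taken from this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// Sketch only: resolve the published SSH port via the same Go template
	// that appears in the cli_runner log lines below.
	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "functional-545557").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}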
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-545557 -n functional-545557
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 logs -n 25: (1.567295966s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-112266 --log_dir                                                  | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	|         | /tmp/nospam-112266 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-112266                                                         | nospam-112266     | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:18 UTC |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:18 UTC | 17 Aug 23 21:19 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:20 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-545557 cache add                                              | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | minikube-local-cache-test:functional-545557                              |                   |         |         |                     |                     |
	| cache   | functional-545557 cache delete                                           | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | minikube-local-cache-test:functional-545557                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	| ssh     | functional-545557 ssh sudo                                               | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-545557                                                        | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-545557 ssh                                                    | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-545557 cache reload                                           | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	| ssh     | functional-545557 ssh                                                    | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-545557 kubectl --                                             | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC | 17 Aug 23 21:20 UTC |
	|         | --context functional-545557                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-545557                                                     | functional-545557 | jenkins | v1.31.2 | 17 Aug 23 21:20 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:20:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:20:18.497074   32705 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:20:18.497250   32705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:20:18.497254   32705 out.go:309] Setting ErrFile to fd 2...
	I0817 21:20:18.497259   32705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:20:18.497501   32705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:20:18.497839   32705 out.go:303] Setting JSON to false
	I0817 21:20:18.498735   32705 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3757,"bootTime":1692303461,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:20:18.498788   32705 start.go:138] virtualization:  
	I0817 21:20:18.501773   32705 out.go:177] * [functional-545557] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:20:18.503863   32705 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:20:18.505863   32705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:20:18.503946   32705 notify.go:220] Checking for updates...
	I0817 21:20:18.509867   32705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:20:18.512331   32705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:20:18.514728   32705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:20:18.516773   32705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:20:18.519299   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:18.519383   32705 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:20:18.545164   32705 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:20:18.545273   32705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:20:18.654557   32705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-17 21:20:18.644221049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:20:18.654709   32705 docker.go:294] overlay module found
	I0817 21:20:18.656853   32705 out.go:177] * Using the docker driver based on existing profile
	I0817 21:20:18.658784   32705 start.go:298] selected driver: docker
	I0817 21:20:18.658791   32705 start.go:902] validating driver "docker" against &{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:18.658883   32705 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:20:18.658975   32705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:20:18.734308   32705 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-17 21:20:18.724813076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:20:18.734784   32705 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:20:18.734822   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:18.734828   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:18.734838   32705 start_flags.go:319] config:
	{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:18.737393   32705 out.go:177] * Starting control plane node functional-545557 in cluster functional-545557
	I0817 21:20:18.739324   32705 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:20:18.741371   32705 out.go:177] * Pulling base image ...
	I0817 21:20:18.743469   32705 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:20:18.743519   32705 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4
	I0817 21:20:18.743526   32705 cache.go:57] Caching tarball of preloaded images
	I0817 21:20:18.743546   32705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:20:18.743603   32705 preload.go:174] Found /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 21:20:18.743611   32705 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on containerd
	I0817 21:20:18.743732   32705 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/config.json ...
	I0817 21:20:18.763701   32705 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:20:18.763717   32705 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:20:18.763738   32705 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:20:18.763772   32705 start.go:365] acquiring machines lock for functional-545557: {Name:mk992cc35a6d639d46bb46397ccce45c20f3d9da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:20:18.763839   32705 start.go:369] acquired machines lock for "functional-545557" in 49.944µs
	I0817 21:20:18.763858   32705 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:20:18.763862   32705 fix.go:54] fixHost starting: 
	I0817 21:20:18.764122   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	I0817 21:20:18.781969   32705 fix.go:102] recreateIfNeeded on functional-545557: state=Running err=<nil>
	W0817 21:20:18.781996   32705 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:20:18.784354   32705 out.go:177] * Updating the running docker "functional-545557" container ...
	I0817 21:20:18.786342   32705 machine.go:88] provisioning docker machine ...
	I0817 21:20:18.786397   32705 ubuntu.go:169] provisioning hostname "functional-545557"
	I0817 21:20:18.786468   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:18.808918   32705 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:18.809396   32705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:18.809410   32705 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-545557 && echo "functional-545557" | sudo tee /etc/hostname
	I0817 21:20:18.953042   32705 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-545557
	
	I0817 21:20:18.953104   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:18.972530   32705 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:18.972943   32705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:18.972959   32705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-545557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-545557/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-545557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:20:19.103678   32705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:20:19.103693   32705 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:20:19.103719   32705 ubuntu.go:177] setting up certificates
	I0817 21:20:19.103727   32705 provision.go:83] configureAuth start
	I0817 21:20:19.103781   32705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-545557
	I0817 21:20:19.125560   32705 provision.go:138] copyHostCerts
	I0817 21:20:19.125624   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:20:19.125651   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:20:19.125732   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:20:19.125819   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:20:19.125823   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:20:19.125847   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:20:19.125896   32705 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:20:19.125899   32705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:20:19.125920   32705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:20:19.125961   32705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.functional-545557 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-545557]
	I0817 21:20:20.395343   32705 provision.go:172] copyRemoteCerts
	I0817 21:20:20.395397   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:20:20.395433   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.413864   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.509267   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:20:20.537755   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0817 21:20:20.566049   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:20:20.593660   32705 provision.go:86] duration metric: configureAuth took 1.489920942s
	I0817 21:20:20.593675   32705 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:20:20.593858   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:20.593863   32705 machine.go:91] provisioned docker machine in 1.807515988s
	I0817 21:20:20.593869   32705 start.go:300] post-start starting for "functional-545557" (driver="docker")
	I0817 21:20:20.593877   32705 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:20:20.593921   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:20:20.593959   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.611775   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.704980   32705 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:20:20.708983   32705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:20:20.709007   32705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:20:20.709016   32705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:20:20.709021   32705 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:20:20.709029   32705 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:20:20.709082   32705 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:20:20.709156   32705 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:20:20.709241   32705 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/test/nested/copy/7745/hosts -> hosts in /etc/test/nested/copy/7745
	I0817 21:20:20.709280   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7745
	I0817 21:20:20.719408   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:20:20.748127   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/test/nested/copy/7745/hosts --> /etc/test/nested/copy/7745/hosts (40 bytes)
	I0817 21:20:20.776565   32705 start.go:303] post-start completed in 182.681899ms
	I0817 21:20:20.776633   32705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:20:20.776670   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.795644   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.884792   32705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:20:20.890529   32705 fix.go:56] fixHost completed within 2.126658749s
	I0817 21:20:20.890542   32705 start.go:83] releasing machines lock for "functional-545557", held for 2.126696311s
	I0817 21:20:20.890607   32705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-545557
	I0817 21:20:20.907960   32705 ssh_runner.go:195] Run: cat /version.json
	I0817 21:20:20.908002   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.908223   32705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:20:20.908257   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:20:20.941480   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:20.952252   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:20:21.039296   32705 ssh_runner.go:195] Run: systemctl --version
	I0817 21:20:21.178564   32705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:20:21.184499   32705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:20:21.206761   32705 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:20:21.206827   32705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:20:21.218211   32705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 21:20:21.218225   32705 start.go:466] detecting cgroup driver to use...
	I0817 21:20:21.218256   32705 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:20:21.218306   32705 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:20:21.233098   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:20:21.247563   32705 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:20:21.247619   32705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:20:21.263491   32705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:20:21.277249   32705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:20:21.402193   32705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:20:21.524820   32705 docker.go:212] disabling docker service ...
	I0817 21:20:21.524875   32705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:20:21.540617   32705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:20:21.554596   32705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:20:21.675588   32705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:20:21.790508   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:20:21.805096   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:20:21.826081   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0817 21:20:21.838592   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:20:21.851018   32705 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:20:21.851073   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:20:21.863903   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:20:21.876311   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:20:21.888814   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:20:21.901691   32705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:20:21.915079   32705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
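	The sed sequence above rewrites /etc/containerd/config.toml for the cgroupfs driver: the pause image is pinned to 3.9, the OOM-score restriction is turned off, SystemdCgroup is set to false, the legacy v1 runtime strings are replaced with io.containerd.runc.v2 (typically the runtime_type value), and the CNI conf_dir is pointed at /etc/cni/net.d. A quick way to confirm the result on the node, with the expected values shown as comments (placement inside the TOML tree is abridged):

	    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|runc\.v2|conf_dir' /etc/containerd/config.toml
	    # Expected after the edits above, somewhere in the [plugins] tree:
	    #   sandbox_image = "registry.k8s.io/pause:3.9"
	    #   restrict_oom_score_adj = false
	    #   SystemdCgroup = false
	    #   runtime_type = "io.containerd.runc.v2"
	    #   conf_dir = "/etc/cni/net.d"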
	I0817 21:20:21.928067   32705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:20:21.938349   32705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:20:21.948906   32705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:20:22.069483   32705 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0817 21:20:22.176773   32705 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:20:22.176838   32705 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:20:22.181895   32705 start.go:534] Will wait 60s for crictl version
	I0817 21:20:22.181953   32705 ssh_runner.go:195] Run: which crictl
	I0817 21:20:22.186576   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:20:22.235604   32705 retry.go:31] will retry after 10.159626752s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:20:22Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 21:20:32.397411   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:20:32.440762   32705 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0817 21:20:32.440819   32705 ssh_runner.go:195] Run: containerd --version
	I0817 21:20:32.470084   32705 ssh_runner.go:195] Run: containerd --version
	I0817 21:20:32.511198   32705 out.go:177] * Preparing Kubernetes v1.27.4 on containerd 1.6.21 ...
	I0817 21:20:32.516945   32705 cli_runner.go:164] Run: docker network inspect functional-545557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:20:32.534170   32705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:20:32.541262   32705 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0817 21:20:32.543193   32705 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:20:32.543274   32705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:32.584068   32705 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:20:32.584078   32705 containerd.go:518] Images already preloaded, skipping extraction
	I0817 21:20:32.584133   32705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:32.624207   32705 containerd.go:604] all images are preloaded for containerd runtime.
	I0817 21:20:32.624217   32705 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:20:32.624281   32705 ssh_runner.go:195] Run: sudo crictl info
	I0817 21:20:32.668222   32705 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0817 21:20:32.668245   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:32.668252   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:32.668261   32705 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:20:32.668278   32705 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-545557 NodeName:functional-545557 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:20:32.668402   32705 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-545557"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:20:32.668466   32705 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-545557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0817 21:20:32.668528   32705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:20:32.679425   32705 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:20:32.679487   32705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:20:32.690090   32705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0817 21:20:32.712015   32705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:20:32.733261   32705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1956 bytes)
	I0817 21:20:32.754895   32705 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:20:32.759442   32705 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557 for IP: 192.168.49.2
	I0817 21:20:32.759462   32705 certs.go:190] acquiring lock for shared ca certs: {Name:mk058988a603cd06c6d056488c4bdaf60bd886a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:32.759587   32705 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key
	I0817 21:20:32.759637   32705 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key
	I0817 21:20:32.759706   32705 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.key
	I0817 21:20:32.759754   32705 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.key.dd3b5fb2
	I0817 21:20:32.759793   32705 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.key
	I0817 21:20:32.759919   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem (1338 bytes)
	W0817 21:20:32.759946   32705 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745_empty.pem, impossibly tiny 0 bytes
	I0817 21:20:32.759953   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem (1675 bytes)
	I0817 21:20:32.759976   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:20:32.759999   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:20:32.760020   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem (1675 bytes)
	I0817 21:20:32.760061   32705 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:20:32.760709   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:20:32.792401   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:20:32.822206   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:20:32.852200   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:20:32.881600   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:20:32.912298   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:20:32.948085   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:20:32.976604   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:20:33.005865   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:20:33.039294   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem --> /usr/share/ca-certificates/7745.pem (1338 bytes)
	I0817 21:20:33.070087   32705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /usr/share/ca-certificates/77452.pem (1708 bytes)
	I0817 21:20:33.100415   32705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:20:33.123730   32705 ssh_runner.go:195] Run: openssl version
	I0817 21:20:33.130979   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:20:33.143031   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.147546   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.147602   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:33.156425   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:20:33.167573   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7745.pem && ln -fs /usr/share/ca-certificates/7745.pem /etc/ssl/certs/7745.pem"
	I0817 21:20:33.179489   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.183984   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:18 /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.184046   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7745.pem
	I0817 21:20:33.192703   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7745.pem /etc/ssl/certs/51391683.0"
	I0817 21:20:33.203894   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77452.pem && ln -fs /usr/share/ca-certificates/77452.pem /etc/ssl/certs/77452.pem"
	I0817 21:20:33.217449   32705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.223740   32705 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:18 /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.223798   32705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77452.pem
	I0817 21:20:33.232590   32705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77452.pem /etc/ssl/certs/3ec20f2e.0"
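	Each openssl/ln pair above follows the standard OpenSSL CA-directory layout: a certificate is linked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so that library code can find it by hash lookup. One iteration of the pattern, spelled out:

	    # Link a CA certificate under its OpenSSL subject-hash name.
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"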
	I0817 21:20:33.243980   32705 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:20:33.248341   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 21:20:33.256551   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 21:20:33.265049   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 21:20:33.273641   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 21:20:33.282239   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 21:20:33.290604   32705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
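	The -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check as a loop, with the cert list mirroring the runs above:

	    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	               etcd/server etcd/healthcheck-client etcd/peer; do
	        openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
	            || echo "certificate ${crt} expires within 24h"
	    done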
	I0817 21:20:33.298952   32705 kubeadm.go:404] StartCluster: {Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:33.299042   32705 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 21:20:33.299097   32705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:20:33.339893   32705 cri.go:89] found id: "7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467"
	I0817 21:20:33.339904   32705 cri.go:89] found id: "b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e"
	I0817 21:20:33.339908   32705 cri.go:89] found id: "63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	I0817 21:20:33.339912   32705 cri.go:89] found id: "dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f"
	I0817 21:20:33.339915   32705 cri.go:89] found id: "2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70"
	I0817 21:20:33.339919   32705 cri.go:89] found id: "32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	I0817 21:20:33.339922   32705 cri.go:89] found id: "e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a"
	I0817 21:20:33.339925   32705 cri.go:89] found id: "bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745"
	I0817 21:20:33.339928   32705 cri.go:89] found id: "1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98"
	I0817 21:20:33.339934   32705 cri.go:89] found id: ""
	I0817 21:20:33.339992   32705 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 21:20:33.373084   32705 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98","pid":1260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98/rootfs","created":"2023-08-17T21:19:10.763298615Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.27.4","io.kubernetes.cri.sandbox-id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b136726eb5b054d3573cf1ee701d51d8"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70","pid":1791,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70/rootfs","created":"2023-08-17T21:19:32.677621896Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.27.4","io.kubernetes.cri.sandbox-id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","io.kubernetes.cri.sandbox-name":"kube-proxy-rprkp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c44cf9c6-7dd1-4380-a813-d1f4a1b9a298"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","pid":1121,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06/rootfs","created":"2023-08-17T21:19:10.53042668Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-545557_3a287b5aac4530715622a5e33a1287a5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a287b5aac4530715622a5e33a1287a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32defc4cba5123b96a7769d6baac655e3eef771
543100a5600f2c0cf46286279","pid":1329,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279/rootfs","created":"2023-08-17T21:19:10.899584799Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.27.4","io.kubernetes.cri.sandbox-id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"31484f02d94a16c70ea16af89113e3b3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","pid":1709,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934/rootfs","created":"2023-08-17T21:19:32.529879947Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9px5c_7771d523-4a98-4e62-9edb-74b5468f95bf","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-9px5c","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7771d523-4a98-4e62-9edb-74b5468f95bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","pid":11
97,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637/rootfs","created":"2023-08-17T21:19:10.642442048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-545557_31484f02d94a16c70ea16af89113e3b3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"31484f02d94a16c70ea16af89113e3b3"},"owner":"root"},{"ociVersion":"1.0.2-d
ev","id":"7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467","pid":2463,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467/rootfs","created":"2023-08-17T21:20:03.21292477Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c2ba22e9-5bd3-4857-869e-b25ea0e1a08d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","pid":1183,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819/rootfs","created":"2023-08-17T21:19:10.616796428Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-545557_a00e55d2d779747966fc8928426ad862","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a00e55d2d779747966fc8928426ad862"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aadbab3
33460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5/rootfs","created":"2023-08-17T21:19:32.890552916Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c2ba22e9-5bd3-4857-869e-b25ea0e1a08d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c2ba22e9-5bd3-4857-869e-b25ea0
e1a08d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e","pid":2141,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e/rootfs","created":"2023-08-17T21:19:46.160800001Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","io.kubernetes.cri.sandbox-name":"coredns-5d78c9869d-gvljn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0261b057-521f-49ff-8046-9c5967ae60f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b0
9f2c0af2745","pid":1249,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745/rootfs","created":"2023-08-17T21:19:10.7314897Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri.sandbox-id":"2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06","io.kubernetes.cri.sandbox-name":"etcd-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a287b5aac4530715622a5e33a1287a5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","pid":2111,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf61adea89cb901ccb37d90a84
2d8e03a0716bce299a59f17ed2b9bcf305969d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d/rootfs","created":"2023-08-17T21:19:46.058845639Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5d78c9869d-gvljn_0261b057-521f-49ff-8046-9c5967ae60f6","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5d78c9869d-gvljn","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0261b057-521f-49ff-8046-9c5967ae60f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","pid":1148,"status":"running","bund
le":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823/rootfs","created":"2023-08-17T21:19:10.559179704Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-545557_b136726eb5b054d3573cf1ee701d51d8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b136726eb5b054d3573cf1ee701d51d8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":
"dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f","pid":1806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f/rootfs","created":"2023-08-17T21:19:32.801212787Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri.sandbox-id":"3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934","io.kubernetes.cri.sandbox-name":"kindnet-9px5c","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7771d523-4a98-4e62-9edb-74b5468f95bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a","pid":1319,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a/rootfs","created":"2023-08-17T21:19:10.936921942Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.27.4","io.kubernetes.cri.sandbox-id":"a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-545557","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a00e55d2d779747966fc8928426ad862"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","pid":1716,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834
b3930e2a3a830","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830/rootfs","created":"2023-08-17T21:19:32.541884769Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rprkp_c44cf9c6-7dd1-4380-a813-d1f4a1b9a298","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rprkp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c44cf9c6-7dd1-4380-a813-d1f4a1b9a298"},"owner":"root"}]
	I0817 21:20:33.373352   32705 cri.go:126] list returned 16 containers
	I0817 21:20:33.373360   32705 cri.go:129] container: {ID:1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98 Status:running}
	I0817 21:20:33.373374   32705 cri.go:135] skipping {1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98 running}: state = "running", want "paused"
	I0817 21:20:33.373382   32705 cri.go:129] container: {ID:2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 Status:running}
	I0817 21:20:33.373388   32705 cri.go:135] skipping {2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 running}: state = "running", want "paused"
	I0817 21:20:33.373393   32705 cri.go:129] container: {ID:2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06 Status:running}
	I0817 21:20:33.373399   32705 cri.go:131] skipping 2fff0537dc1c697ea5c5a6dacd579ecd073a252ef1e2ace677afe7eaccb9aa06 - not in ps
	I0817 21:20:33.373403   32705 cri.go:129] container: {ID:32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 Status:running}
	I0817 21:20:33.373409   32705 cri.go:135] skipping {32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 running}: state = "running", want "paused"
	I0817 21:20:33.373414   32705 cri.go:129] container: {ID:3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934 Status:running}
	I0817 21:20:33.373420   32705 cri.go:131] skipping 3e2b34e654e56f339c4409328e48fa6f37282867da1702b8b27f38cd69825934 - not in ps
	I0817 21:20:33.373424   32705 cri.go:129] container: {ID:6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 Status:running}
	I0817 21:20:33.373430   32705 cri.go:131] skipping 6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 - not in ps
	I0817 21:20:33.373434   32705 cri.go:129] container: {ID:7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 Status:running}
	I0817 21:20:33.373439   32705 cri.go:135] skipping {7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 running}: state = "running", want "paused"
	I0817 21:20:33.373444   32705 cri.go:129] container: {ID:a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819 Status:running}
	I0817 21:20:33.373450   32705 cri.go:131] skipping a4bf8cbafb685ce7703cfdaef0cd825ed1c678733421ba9e167b573117f8f819 - not in ps
	I0817 21:20:33.373454   32705 cri.go:129] container: {ID:aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5 Status:running}
	I0817 21:20:33.373459   32705 cri.go:131] skipping aadbab333460170004fc006842c3ab87b68ba82ac729446fef790e8291b706f5 - not in ps
	I0817 21:20:33.373463   32705 cri.go:129] container: {ID:b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e Status:running}
	I0817 21:20:33.373471   32705 cri.go:135] skipping {b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e running}: state = "running", want "paused"
	I0817 21:20:33.373477   32705 cri.go:129] container: {ID:bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 Status:running}
	I0817 21:20:33.373482   32705 cri.go:135] skipping {bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 running}: state = "running", want "paused"
	I0817 21:20:33.373487   32705 cri.go:129] container: {ID:cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d Status:running}
	I0817 21:20:33.373492   32705 cri.go:131] skipping cf61adea89cb901ccb37d90a842d8e03a0716bce299a59f17ed2b9bcf305969d - not in ps
	I0817 21:20:33.373496   32705 cri.go:129] container: {ID:da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823 Status:running}
	I0817 21:20:33.373502   32705 cri.go:131] skipping da104c5e9d918021f8c60de47293195aae1a814a4311455120016aaa39e56823 - not in ps
	I0817 21:20:33.373506   32705 cri.go:129] container: {ID:dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f Status:running}
	I0817 21:20:33.373512   32705 cri.go:135] skipping {dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f running}: state = "running", want "paused"
	I0817 21:20:33.373517   32705 cri.go:129] container: {ID:e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a Status:running}
	I0817 21:20:33.373522   32705 cri.go:135] skipping {e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a running}: state = "running", want "paused"
	I0817 21:20:33.373527   32705 cri.go:129] container: {ID:f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830 Status:running}
	I0817 21:20:33.373533   32705 cri.go:131] skipping f5307ce8a0a7951b28d7e6aeb0761c6ae4ae040fa645cdae834b3930e2a3a830 - not in ps
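	What the skipping lines above encode: cri.go is looking for containers in state "paused" to resume, so it intersects the IDs crictl reported for kube-system with runc's task list, skipping tasks that are still running and skipping sandbox IDs that never appeared in the crictl output. A rough equivalent of that intersection (jq is an assumption here; it is not part of the node image):

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system > /tmp/kube-system.ids
	    sudo runc --root /run/containerd/runc/k8s.io list -f json \
	        | jq -r '.[] | select(.status == "paused") | .id' \
	        | grep -F -f /tmp/kube-system.ids || echo "no paused kube-system containers"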
	I0817 21:20:33.373581   32705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:20:33.384218   32705 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 21:20:33.384226   32705 kubeadm.go:636] restartCluster start
	I0817 21:20:33.384279   32705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 21:20:33.394323   32705 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:33.394860   32705 kubeconfig.go:92] found "functional-545557" server: "https://192.168.49.2:8441"
	I0817 21:20:33.396506   32705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 21:20:33.406699   32705 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-08-17 21:18:59.869413085 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-08-17 21:20:32.747872085 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0817 21:20:33.406709   32705 kubeadm.go:1128] stopping kube-system containers ...
	I0817 21:20:33.406719   32705 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 21:20:33.406780   32705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:20:33.457395   32705 cri.go:89] found id: "7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467"
	I0817 21:20:33.457407   32705 cri.go:89] found id: "b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e"
	I0817 21:20:33.457412   32705 cri.go:89] found id: "63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	I0817 21:20:33.457425   32705 cri.go:89] found id: "dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f"
	I0817 21:20:33.457428   32705 cri.go:89] found id: "2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70"
	I0817 21:20:33.457432   32705 cri.go:89] found id: "32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	I0817 21:20:33.457435   32705 cri.go:89] found id: "e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a"
	I0817 21:20:33.457439   32705 cri.go:89] found id: "bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745"
	I0817 21:20:33.457442   32705 cri.go:89] found id: "1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98"
	I0817 21:20:33.457447   32705 cri.go:89] found id: ""
	I0817 21:20:33.457451   32705 cri.go:234] Stopping containers: [7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98]
	I0817 21:20:33.457504   32705 ssh_runner.go:195] Run: which crictl
	I0817 21:20:33.461961   32705 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98
	I0817 21:20:38.657427   32705 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98: (5.195431437s)
	W0817 21:20:38.657479   32705 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467 b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e 63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f 2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70 32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745 1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98: Process exited with status 1
	stdout:
	7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467
	b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e
	
	stderr:
	E0817 21:20:38.654541    3437 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": not found" containerID="63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef"
	time="2023-08-17T21:20:38Z" level=fatal msg="stopping the container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"63793e51912df3b15456b8fdcc3f7ec25df0cac443f5f992eb4359a03e6a04ef\": not found"
	I0817 21:20:38.657536   32705 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 21:20:38.735084   32705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:20:38.746118   32705 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 21:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 21:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 17 21:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 21:19 /etc/kubernetes/scheduler.conf
	
	I0817 21:20:38.746177   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0817 21:20:38.757569   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0817 21:20:38.768324   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0817 21:20:38.779182   32705 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:38.779237   32705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 21:20:38.789627   32705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0817 21:20:38.800666   32705 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:20:38.800718   32705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
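	The grep/rm pairs above are a consistency check on the existing kubeconfigs: any file under /etc/kubernetes that no longer names the expected server https://control-plane.minikube.internal:8441 is removed so that the kubeadm init phase kubeconfig step below regenerates it. Condensed into one loop, with the file list as in this run:

	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
	            || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done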
	I0817 21:20:38.810757   32705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:20:38.821186   32705 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 21:20:38.821199   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:38.889194   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.458509   32705 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.569292433s)
	I0817 21:20:42.458526   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.667326   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.745649   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:42.826151   32705 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:20:42.826213   32705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:20:42.851211   32705 api_server.go:72] duration metric: took 25.060509ms to wait for apiserver process to appear ...
	I0817 21:20:42.851224   32705 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:20:42.851239   32705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 21:20:42.864171   32705 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0817 21:20:42.883600   32705 api_server.go:141] control plane version: v1.27.4
	I0817 21:20:42.883616   32705 api_server.go:131] duration metric: took 32.386878ms to wait for apiserver health ...
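	The readiness gate above has two steps: pgrep confirms a kube-apiserver process exists, then /healthz is polled over HTTPS until it answers ok. A minimal curl loop for the second step (-k because the serving cert is signed by minikubeCA, which the calling shell does not trust):

	    until [ "$(curl -sk https://192.168.49.2:8441/healthz)" = "ok" ]; do
	        sleep 1
	    done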
	I0817 21:20:42.883624   32705 cni.go:84] Creating CNI manager for ""
	I0817 21:20:42.883629   32705 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:20:42.886664   32705 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:20:42.892117   32705 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:20:42.899615   32705 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:20:42.899626   32705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:20:42.931957   32705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:20:43.416360   32705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:20:43.427481   32705 system_pods.go:59] 8 kube-system pods found
	I0817 21:20:43.427498   32705 system_pods.go:61] "coredns-5d78c9869d-gvljn" [0261b057-521f-49ff-8046-9c5967ae60f6] Running
	I0817 21:20:43.427502   32705 system_pods.go:61] "etcd-functional-545557" [c8074a9c-97a0-47a6-8cb8-05f491da1868] Running
	I0817 21:20:43.427506   32705 system_pods.go:61] "kindnet-9px5c" [7771d523-4a98-4e62-9edb-74b5468f95bf] Running
	I0817 21:20:43.427510   32705 system_pods.go:61] "kube-apiserver-functional-545557" [ca79484f-521a-4f20-b3cf-d0b7f7387643] Running
	I0817 21:20:43.427514   32705 system_pods.go:61] "kube-controller-manager-functional-545557" [89597ce9-2cdd-43da-bcad-5ee36ccbc01d] Running
	I0817 21:20:43.427518   32705 system_pods.go:61] "kube-proxy-rprkp" [c44cf9c6-7dd1-4380-a813-d1f4a1b9a298] Running
	I0817 21:20:43.427522   32705 system_pods.go:61] "kube-scheduler-functional-545557" [84382816-6f32-43d8-aeb4-af38fb8fe4d3] Running
	I0817 21:20:43.427530   32705 system_pods.go:61] "storage-provisioner" [c2ba22e9-5bd3-4857-869e-b25ea0e1a08d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 21:20:43.427540   32705 system_pods.go:74] duration metric: took 11.166872ms to wait for pod list to return data ...
	I0817 21:20:43.427547   32705 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:20:43.431017   32705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0817 21:20:43.431034   32705 node_conditions.go:123] node cpu capacity is 2
	I0817 21:20:43.431044   32705 node_conditions.go:105] duration metric: took 3.493316ms to run NodePressure ...
	I0817 21:20:43.431059   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:20:43.665486   32705 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0817 21:20:43.686759   32705 retry.go:31] will retry after 282.637409ms: kubelet not initialised
	I0817 21:20:43.980769   32705 kubeadm.go:787] kubelet initialised
	I0817 21:20:43.980781   32705 kubeadm.go:788] duration metric: took 315.282451ms waiting for restarted kubelet to initialise ...
	I0817 21:20:43.980797   32705 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:20:43.993427   32705 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.060657   32705 pod_ready.go:97] error getting pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-gvljn": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060673   32705 pod_ready.go:81] duration metric: took 1.067231893s waiting for pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.060684   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-gvljn" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-gvljn": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060708   32705 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.060893   32705 pod_ready.go:97] error getting pod "etcd-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060900   32705 pod_ready.go:81] duration metric: took 186.188µs waiting for pod "etcd-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.060906   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.060918   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061064   32705 pod_ready.go:97] error getting pod "kube-apiserver-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061069   32705 pod_ready.go:81] duration metric: took 146.606µs waiting for pod "kube-apiserver-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061075   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061087   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061228   32705 pod_ready.go:97] error getting pod "kube-controller-manager-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061233   32705 pod_ready.go:81] duration metric: took 141.117µs waiting for pod "kube-controller-manager-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061240   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061249   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rprkp" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061448   32705 pod_ready.go:97] error getting pod "kube-proxy-rprkp" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rprkp": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061454   32705 pod_ready.go:81] duration metric: took 200.604µs waiting for pod "kube-proxy-rprkp" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061460   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-rprkp" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rprkp": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061470   32705 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-545557" in "kube-system" namespace to be "Ready" ...
	I0817 21:20:45.061631   32705 pod_ready.go:97] error getting pod "kube-scheduler-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061639   32705 pod_ready.go:81] duration metric: took 162.598µs waiting for pod "kube-scheduler-functional-545557" in "kube-system" namespace to be "Ready" ...
	E0817 21:20:45.061644   32705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-545557" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.061654   32705 pod_ready.go:38] duration metric: took 1.080848003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:20:45.061670   32705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	W0817 21:20:45.075251   32705 kubeadm.go:796] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
	stdout:
	
	stderr:
	cat: /proc//oom_adj: No such file or directory
	I0817 21:20:45.075268   32705 kubeadm.go:640] restartCluster took 11.691035861s
	I0817 21:20:45.075275   32705 kubeadm.go:406] StartCluster complete in 11.776332196s
	I0817 21:20:45.075291   32705 settings.go:142] acquiring lock: {Name:mk7a5a07825601654f691495799b769adb4489ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:45.075362   32705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:20:45.076118   32705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/kubeconfig: {Name:mkf341824bbe915f226637e75b19e0928287e2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:45.077543   32705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:20:45.077842   32705 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:20:45.077878   32705 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:20:45.077943   32705 addons.go:69] Setting storage-provisioner=true in profile "functional-545557"
	I0817 21:20:45.077957   32705 addons.go:231] Setting addon storage-provisioner=true in "functional-545557"
	W0817 21:20:45.077963   32705 addons.go:240] addon storage-provisioner should already be in state true
	I0817 21:20:45.078040   32705 host.go:66] Checking if "functional-545557" exists ...
	I0817 21:20:45.078514   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	W0817 21:20:45.079343   32705 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-545557" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0817 21:20:45.079369   32705 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.079461   32705 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:20:45.084341   32705 out.go:177] * Verifying Kubernetes components...
	I0817 21:20:45.080496   32705 addons.go:69] Setting default-storageclass=true in profile "functional-545557"
	I0817 21:20:45.084421   32705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-545557"
	I0817 21:20:45.084772   32705 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	I0817 21:20:45.088321   32705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:20:45.129609   32705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:45.131919   32705 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:20:45.131932   32705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:20:45.132000   32705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	W0817 21:20:45.138974   32705 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8441: connect: connection refused]
	I0817 21:20:45.170964   32705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	E0817 21:20:45.319035   32705 start.go:866] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0817 21:20:45.319060   32705 start.go:291] Unable to inject {"host.minikube.internal": 192.168.49.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0817 21:20:45.319074   32705 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I0817 21:20:45.319243   32705 node_ready.go:35] waiting up to 6m0s for node "functional-545557" to be "Ready" ...
	I0817 21:20:45.319634   32705 node_ready.go:53] error getting node "functional-545557": Get "https://192.168.49.2:8441/api/v1/nodes/functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	I0817 21:20:45.319643   32705 node_ready.go:38] duration metric: took 390.336µs waiting for node "functional-545557" to be "Ready" ...
	I0817 21:20:45.321374   32705 out.go:177] 
	W0817 21:20:45.323835   32705 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-545557": Get "https://192.168.49.2:8441/api/v1/nodes/functional-545557": dial tcp 192.168.49.2:8441: connect: connection refused
	W0817 21:20:45.323988   32705 out.go:239] * 
	W0817 21:20:45.325227   32705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:20:45.328776   32705 out.go:177] 
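
The GUEST_START failure above is the node readiness probe hitting connection refused on 192.168.49.2:8441 while the apiserver container was being restarted. The earlier oom_adj warning has the same root cause: cat /proc/$(pgrep kube-apiserver)/oom_adj ran while no kube-apiserver process existed, so the command substitution expanded to the empty string and the path collapsed to /proc//oom_adj. A minimal guarded variant of that check, as an illustrative sketch rather than minikube's actual code:

    # Only read oom_adj when a kube-apiserver process actually exists;
    # otherwise report the miss instead of probing /proc//oom_adj.
    if pid=$(pgrep -xn kube-apiserver); then
      cat "/proc/${pid}/oom_adj"
    else
      echo "kube-apiserver is not running" >&2
    fi
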
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0aa7c042d6413       64aece92d6bde       7 seconds ago        Running             kube-apiserver            1                   d32a70d275515       kube-apiserver-functional-545557
	7e0f17a33e5fa       ba04bb24b9575       8 seconds ago        Running             storage-provisioner       2                   aadbab3334601       storage-provisioner
	ad092e1964ae8       97e04611ad434       8 seconds ago        Running             coredns                   1                   cf61adea89cb9       coredns-5d78c9869d-gvljn
	bfcb519b09ff2       64aece92d6bde       8 seconds ago        Exited              kube-apiserver            0                   d32a70d275515       kube-apiserver-functional-545557
	7e3284ab1f69a       ba04bb24b9575       49 seconds ago       Exited              storage-provisioner       1                   aadbab3334601       storage-provisioner
	b9329bd756ee5       97e04611ad434       About a minute ago   Exited              coredns                   0                   cf61adea89cb9       coredns-5d78c9869d-gvljn
	dff01c933c29d       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   3e2b34e654e56       kindnet-9px5c
	2305529b95840       532e5a30e948f       About a minute ago   Running             kube-proxy                0                   f5307ce8a0a79       kube-proxy-rprkp
	e78322d1b35b2       6eb63895cb67f       About a minute ago   Running             kube-scheduler            0                   a4bf8cbafb685       kube-scheduler-functional-545557
	bfa7bd71ecb82       24bc64e911039       About a minute ago   Running             etcd                      0                   2fff0537dc1c6       etcd-functional-545557
	1a897bb2deb19       389f6f052cf83       About a minute ago   Running             kube-controller-manager   0                   da104c5e9d918       kube-controller-manager-functional-545557
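
The table shows two kube-apiserver containers in the same sandbox d32a70d275515: attempt 0 (bfcb519b09ff2) exited, and attempt 1 (0aa7c042d6413) took over seconds before this dump. On a containerd node a similar view can be reproduced with crictl; a sketch using the truncated IDs from the table (columns and JSON fields vary slightly by crictl version):

    # List every kube-apiserver container, including exited attempts
    sudo crictl ps -a --name kube-apiserver
    # Exit code of the failed attempt 0
    sudo crictl inspect bfcb519b09ff2 | grep -m1 '"exitCode"'
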
	
	* 
	* ==> containerd <==
	* Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.128154314Z" level=info msg="StartContainer for \"7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b\""
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.210580316Z" level=info msg="StartContainer for \"ad092e1964ae8dd8b14760be547953684a22aed7bcc004802c128444744a4fd6\" returns successfully"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.231240495Z" level=info msg="StartContainer for \"7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b\" returns successfully"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.928244604Z" level=info msg="StopContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" with timeout 1 (s)"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.928912581Z" level=info msg="Stop container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" with signal terminated"
	Aug 17 21:20:44 functional-545557 containerd[3236]: time="2023-08-17T21:20:44.976507478Z" level=info msg="CreateContainer within sandbox \"d32a70d275515c00a2edf7e6a29850ef3e800f6ba0123019bc20ee0d0a247cec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.011631172Z" level=info msg="shim disconnected" id=6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.011963417Z" level=warning msg="cleaning up after shim disconnected" id=6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637 namespace=k8s.io
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.012068728Z" level=info msg="cleaning up dead shim"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.038112622Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:20:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3825 runtime=io.containerd.runc.v2\n"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.040379956Z" level=info msg="CreateContainer within sandbox \"d32a70d275515c00a2edf7e6a29850ef3e800f6ba0123019bc20ee0d0a247cec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.041149535Z" level=info msg="StartContainer for \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.071900871Z" level=info msg="shim disconnected" id=32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.072189154Z" level=warning msg="cleaning up after shim disconnected" id=32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279 namespace=k8s.io
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.072280557Z" level=info msg="cleaning up dead shim"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.145534682Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:20:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\n"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.172455837Z" level=info msg="StopContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.173609812Z" level=info msg="StopPodSandbox for \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174017460Z" level=info msg="Container to stop \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174285780Z" level=info msg="TearDown network for sandbox \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\" successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.174388186Z" level=info msg="StopPodSandbox for \"6a4d06c2289d7d5bbca67e7f84f1be8fe9f938be03af69a20ee44e230f9d5637\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.238912305Z" level=info msg="StartContainer for \"0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.982733823Z" level=info msg="RemoveContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\""
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.988728859Z" level=info msg="RemoveContainer for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" returns successfully"
	Aug 17 21:20:45 functional-545557 containerd[3236]: time="2023-08-17T21:20:45.989659822Z" level=error msg="ContainerStatus for \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found"
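
This excerpt covers the two-second window in which containerd stopped the old apiserver container (32defc4cba512...) and started attempt 1. Assuming containerd runs as a systemd unit on the node, as it does in the minikube base image, the same window can be replayed from the journal; the time bounds below are taken from the log:

    # Replay containerd's view of the apiserver restart (illustrative)
    sudo journalctl -u containerd --no-pager \
      --since "2023-08-17 21:20:44" --until "2023-08-17 21:20:46"
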
	
	* 
	* ==> coredns [ad092e1964ae8dd8b14760be547953684a22aed7bcc004802c128444744a4fd6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41357 - 29451 "HINFO IN 7262825344432124189.578214572530978332. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.06043919s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Unexpected watch close - watch lasted less than a second and no items received
	
	* 
	* ==> coredns [b9329bd756ee56abe7ad5d276a60a9b9c211d6a0c847eccbb89c47a66f4c837e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55108 - 22279 "HINFO IN 1190291433924974191.7624145215812639463. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.075802328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
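
The two coredns excerpts are complementary: the new instance (ad092e1964ae8) logs its watch connections being torn down as the apiserver restarted, while the old instance (b9329bd756ee5) shut down cleanly on SIGTERM. Once the cluster answers again, the replaced instance's output is still retrievable with kubectl's standard --previous flag; pod name taken from the report:

    # Logs of the terminated coredns container (works while restart count > 0)
    kubectl -n kube-system logs coredns-5d78c9869d-gvljn --previous
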
	
	* 
	* ==> describe nodes <==
	* Name:               functional-545557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-545557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=functional-545557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_19_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-545557
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:19:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:20:43 +0000   Thu, 17 Aug 2023 21:20:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-545557
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 32a6d466b0d2406d9b1adac220bb80a0
	  System UUID:                a6e31771-80c9-484e-8995-d9d277994702
	  Boot ID:                    da56fcbe-e8d4-44e4-8927-1925d04822e5
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-gvljn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     81s
	  kube-system                 etcd-functional-545557                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         94s
	  kube-system                 kindnet-9px5c                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-functional-545557             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-functional-545557    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-rprkp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-functional-545557             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 95s                  kubelet          Starting kubelet.
	  Normal  NodeNotReady             94s                  kubelet          Node functional-545557 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    94s                  kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                94s                  kubelet          Node functional-545557 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  94s                  kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           82s                  node-controller  Node functional-545557 event: Registered Node functional-545557 in Controller
	  Normal  Starting                 10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s                  kubelet          Node functional-545557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s                  kubelet          Node functional-545557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s                  kubelet          Node functional-545557 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10s                  kubelet          Node functional-545557 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9s                   kubelet          Node functional-545557 status is now: NodeReady
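
On a reachable cluster the per-pod requests/limits table in this dump can be regenerated directly; a sketch using the node name from the report:

    # Print just the Non-terminated Pods table for this node
    kubectl describe node functional-545557 | sed -n '/Non-terminated Pods:/,/^Events:/p'
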
	
	* 
	* ==> dmesg <==
	* [Aug17 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015730] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.269498] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.619452] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [bfa7bd71ecb827d781e51f461580bf9a615faa2156f070dc583b09f2c0af2745] <==
	* {"level":"info","ts":"2023-08-17T21:19:10.854Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.855Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.855Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-08-17T21:19:10.856Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:19:10.857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-17T21:19:11.143Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-545557 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:19:11.144Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.145Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:19:11.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.152Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.152Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:19:11.153Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-17T21:19:11.171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:19:11.199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
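
etcd itself came up cleanly: a single member that won the election at term 2 and began serving on 2379, so the 8441 failures elsewhere in this report are confined to the apiserver. Its health can be confirmed from the node with etcdctl; a hedged sketch in which the binary's availability and the certificate paths (minikube's defaults) are assumptions:

    sudo ETCDCTL_API=3 etcdctl endpoint health \
      --endpoints=https://192.168.49.2:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key
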
	
	* 
	* ==> kernel <==
	*  21:20:53 up  1:03,  0 users,  load average: 0.75, 0.96, 0.56
	Linux functional-545557 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [dff01c933c29dc38bc9602edb03c69b580cd1cdd1c661fd1a75638e78ae4838f] <==
	* I0817 21:19:32.888608       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0817 21:19:32.888676       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0817 21:19:32.888793       1 main.go:116] setting mtu 1500 for CNI 
	I0817 21:19:32.888841       1 main.go:146] kindnetd IP family: "ipv4"
	I0817 21:19:32.888877       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0817 21:19:33.384810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:33.384844       1 main.go:227] handling current node
	I0817 21:19:43.401699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:43.401732       1 main.go:227] handling current node
	I0817 21:19:53.412784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:19:53.412817       1 main.go:227] handling current node
	I0817 21:20:03.417246       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:03.417275       1 main.go:227] handling current node
	I0817 21:20:13.428244       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:13.428274       1 main.go:227] handling current node
	I0817 21:20:23.438196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:23.438221       1 main.go:227] handling current node
	I0817 21:20:33.450125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:33.450157       1 main.go:227] handling current node
	I0817 21:20:43.462797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:20:43.462822       1 main.go:227] handling current node
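
kindnet re-syncs the node list roughly every ten seconds and, on this single-node cluster, only ever handles the current node. The CNI config it maintains can be inspected on the node; the file name below is kindnet's usual default and an assumption here:

    # kindnet's generated CNI config (name assumed; list /etc/cni/net.d if it differs)
    sudo cat /etc/cni/net.d/10-kindnet.conflist
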
	
	* 
	* ==> kube-apiserver [0aa7c042d64137fda5cae0e40f0b7f8d6edc31522f7a2073a65c0dbd7c2cdf90] <==
	* I0817 21:20:47.825969       1 naming_controller.go:291] Starting NamingConditionController
	I0817 21:20:47.825978       1 establishing_controller.go:76] Starting EstablishingController
	I0817 21:20:47.825986       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0817 21:20:47.825993       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0817 21:20:47.826001       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0817 21:20:47.839895       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0817 21:20:48.078273       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0817 21:20:48.529456       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:20:48.605279       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0817 21:20:48.606062       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 21:20:48.617991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:20:48.618353       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:20:48.667074       1 shared_informer.go:318] Caches are synced for configmaps
	I0817 21:20:48.669335       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0817 21:20:48.669363       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0817 21:20:48.669342       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0817 21:20:48.678710       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0817 21:20:48.679073       1 aggregator.go:152] initial CRD sync complete...
	I0817 21:20:48.679129       1 autoregister_controller.go:141] Starting autoregister controller
	I0817 21:20:48.679169       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0817 21:20:48.679227       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:20:48.706335       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0817 21:20:48.827678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 21:20:49.395293       1 controller.go:624] quota admission added evaluator for: endpoints
	I0817 21:20:49.397722       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [bfcb519b09ff2090f2611d5a2f2cd5c0400539fd38faa3c63178b9c1150674e3] <==
	* I0817 21:20:43.949289       1 server.go:553] external host was not specified, using 192.168.49.2
	I0817 21:20:43.950453       1 server.go:166] Version: v1.27.4
	I0817 21:20:43.950570       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0817 21:20:43.951061       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
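
This is attempt 0 from the container status table: it exited immediately because 0.0.0.0:8441 was still bound by the previous apiserver, which the containerd log shows being stopped at 21:20:44-45; attempt 1 then bound successfully. Had the conflict persisted, the holder of the port could be identified on the node; an illustrative sketch:

    # Which process is listening on 8441?
    sudo ss -ltnp 'sport = :8441'
    # equivalent, via psmisc
    sudo fuser -v 8441/tcp
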
	
	* 
	* ==> kube-controller-manager [1a897bb2deb19fae0a5500f264a1b93b5dc35e06c406263082848ddb6749fd98] <==
	* E0817 21:20:48.415279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:59550->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415349       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:59348->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415414       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59250->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415447       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.VolumeAttachment: unknown (get volumeattachments.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59286->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415515       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:59780->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415588       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Secret: unknown (get secrets) - error from a previous attempt: read tcp 192.168.49.2:59208->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59750->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.415689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59688->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:59528->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PriorityClass: unknown (get priorityclasses.scheduling.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59226->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447592       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:59322->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447679       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59362->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas) - error from a previous attempt: read tcp 192.168.49.2:59380->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59398->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448265       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59312->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448336       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59450->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59452->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448574       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59462->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448637       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59476->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448681       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:59484->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448833       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:59500->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates) - error from a previous attempt: read tcp 192.168.49.2:59512->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.448910       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v2.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling) - error from a previous attempt: read tcp 192.168.49.2:59514->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.449090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts) - error from a previous attempt: read tcp 192.168.49.2:59520->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.452310       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59668->192.168.49.2:8441: read: connection reset by peer
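
Every error above is the client side of the same apiserver restart: the controller-manager's informer watches were reset when the connections on 8441 dropped, and they re-list once attempt 1 serves. The size of the burst can be measured after the fact; a sketch with the pod name from the report:

    # Count watch resets in the controller-manager log
    kubectl -n kube-system logs kube-controller-manager-functional-545557 \
      | grep -c 'connection reset by peer'
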
	
	* 
	* ==> kube-proxy [2305529b958404d3124c3d000ba76b44227ad09f7523991ec795d4a592821f70] <==
	* I0817 21:19:32.763554       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0817 21:19:32.763767       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0817 21:19:32.763848       1 server_others.go:554] "Using iptables proxy"
	I0817 21:19:32.807748       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:19:32.807791       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0817 21:19:32.807800       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0817 21:19:32.807813       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0817 21:19:32.807831       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:19:32.808524       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:19:32.808536       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:19:32.812552       1 config.go:188] "Starting service config controller"
	I0817 21:19:32.812574       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:19:32.812598       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:19:32.812603       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:19:32.821280       1 config.go:315] "Starting node config controller"
	I0817 21:19:32.821319       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:19:32.913484       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:19:32.913531       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:19:32.921945       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e78322d1b35b28f8ff6cc0e37fbb662077dbd4ed5dec31d3a9e254114d38724a] <==
	* E0817 21:19:15.020818       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 21:19:15.020523       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:19:15.021031       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:19:15.020563       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:19:15.022898       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:19:15.020466       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:19:15.023091       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 21:19:15.023427       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:19:15.023575       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0817 21:19:16.306576       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 21:20:48.390183       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59202->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.442986       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:59258->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443262       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:59268->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443443       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59294->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443602       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:59300->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443769       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:59332->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.443929       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:59344->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:59350->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444248       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:59218->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:59376->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444573       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:59238->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444737       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59418->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.444888       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:59396->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.447779       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59404->192.168.49.2:8441: read: connection reset by peer
	E0817 21:20:48.453109       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59432->192.168.49.2:8441: read: connection reset by peer
	
	* 
	* ==> kubelet <==
	* Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuset/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/systemd/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/perf_event/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/unified/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/memory/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/blkio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/net_cls,net_prio/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/freezer/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: time="2023-08-17T21:20:44Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod31484f02d94a16c70ea16af89113e3b3/32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279: device or resource busy"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: I0817 21:20:44.973725    3566 scope.go:115] "RemoveContainer" containerID="bfcb519b09ff2090f2611d5a2f2cd5c0400539fd38faa3c63178b9c1150674e3"
	Aug 17 21:20:44 functional-545557 kubelet[3566]: E0817 21:20:44.997177    3566 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-545557.177c487918bcaef6", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-545557", UID:"31484f02d94a16c70ea16af89113e3b3", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Stopping container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"functional-545557"}, FirstTimestamp:time.Date(2023, time.August, 17, 21, 20, 44, 927725302, time.Local), LastTimestamp:time.Date(2023, time.August, 17, 21, 20, 44, 927725302, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": read tcp 192.168.49.2:59180->192.168.49.2:8441: read: connection reset by peer'(may retry after sleeping)
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.980660    3566 scope.go:115] "RemoveContainer" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.989276    3566 scope.go:115] "RemoveContainer" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: E0817 21:20:45.990018    3566 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found" containerID="32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279"
	Aug 17 21:20:45 functional-545557 kubelet[3566]: I0817 21:20:45.990144    3566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279} err="failed to get container status \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": rpc error: code = NotFound desc = an error occurred when try to find container \"32defc4cba5123b96a7769d6baac655e3eef771543100a5600f2c0cf46286279\": not found"
	Aug 17 21:20:46 functional-545557 kubelet[3566]: I0817 21:20:46.927532    3566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=31484f02d94a16c70ea16af89113e3b3 path="/var/lib/kubelet/pods/31484f02d94a16c70ea16af89113e3b3/volumes"
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.348014    3566 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59804->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.348091    3566 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59832->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:48 functional-545557 kubelet[3566]: E0817 21:20:48.383464    3566 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:59902->192.168.49.2:8441: read: connection reset by peer
	Aug 17 21:20:49 functional-545557 kubelet[3566]: I0817 21:20:49.147102    3566 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Aug 17 21:20:49 functional-545557 kubelet[3566]: I0817 21:20:49.534766    3566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-545557" podStartSLOduration=5.534707699 podCreationTimestamp="2023-08-17 21:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:20:44.075883843 +0000 UTC m=+1.412825071" watchObservedRunningTime="2023-08-17 21:20:49.534707699 +0000 UTC m=+6.871648968"
	
	* 
	* ==> storage-provisioner [7e0f17a33e5faa0fccb17b3b6a1bd0609346f3e4aa022cc7ed20f2e46440d14b] <==
	* I0817 21:20:44.241508       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:20:44.255255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:20:44.255487       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [7e3284ab1f69a13e7ac24eacf8eadd08466bfbcfe645580be9cdaf9a4878b467] <==
	* I0817 21:20:03.249705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:20:03.264454       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:20:03.264766       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:20:03.277358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:20:03.278074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19!
	I0817 21:20:03.278647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0829c91a-ee07-421e-8127-a4a6a2ff64cb", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19 became leader
	I0817 21:20:03.379080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-545557_a12882bc-013e-42bd-9b5c-81d65d67df19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-545557 -n functional-545557
helpers_test.go:261: (dbg) Run:  kubectl --context functional-545557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.67s)
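The failure above coincides with a kube-apiserver restart: the scheduler's watches drop with "connection reset by peer" on 192.168.49.2:8441, and the kubelet transiently fails to remove the old apiserver pod's cgroups before recreating the container. A minimal manual re-check of control-plane health, reusing the same commands the post-mortem helpers run above (a sketch; it assumes the functional-545557 profile is still running):

	# Apiserver status as minikube sees it, then any kube-system pods stuck off Running
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-545557
	kubectl --context functional-545557 get po -A --field-selector=status.phase!=Running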

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr: (3.678820728s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-545557" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.95s)
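The check at functional_test.go:442 looks for the loaded tag in "image ls" output. A manual reproduction of the same load-then-list sequence (a sketch mirroring the test invocation above; the trailing grep is only for readability):

	out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
	out/minikube-linux-arm64 -p functional-545557 image ls | grep addon-resizer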

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr: (3.283142357s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-545557" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.671481898s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-545557
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 image load --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr: (3.417551963s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-545557" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image save gcr.io/google-containers/addon-resizer:functional-545557 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)
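A quick manual reproduction of the save step (a sketch; /tmp/addon-resizer-save.tar is an arbitrary scratch path, not the path the test uses):

	# Save the image out of the cluster, then confirm the tarball was actually written
	out/minikube-linux-arm64 -p functional-545557 image save gcr.io/google-containers/addon-resizer:functional-545557 /tmp/addon-resizer-save.tar --alsologtostderr
	ls -l /tmp/addon-resizer-save.tar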

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0817 21:21:20.696647   36805 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:21:20.698463   36805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:20.698505   36805 out.go:309] Setting ErrFile to fd 2...
	I0817 21:21:20.698525   36805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:20.698919   36805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:21:20.699622   36805 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:21:20.699910   36805 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:21:20.700506   36805 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
	I0817 21:21:20.733812   36805 ssh_runner.go:195] Run: systemctl --version
	I0817 21:21:20.733909   36805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
	I0817 21:21:20.801641   36805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
	I0817 21:21:20.899164   36805 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0817 21:21:20.899214   36805 cache_images.go:254] Failed to load cached images for profile functional-545557. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0817 21:21:20.899244   36805 cache_images.go:262] succeeded pushing to: 
	I0817 21:21:20.899248   36805 cache_images.go:263] failed pushing to: functional-545557

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)
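The "stat ... no such file or directory" in the stderr block above cascades from the ImageSaveToFile failure: the tarball was never written, so this test has nothing to load. A guarded re-run that makes the dependency explicit (a sketch, using the same path the test logs):

	# Only attempt the load if the save step actually produced the tarball
	test -f /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar && out/minikube-linux-arm64 -p functional-545557 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar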

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (49.81s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-679314 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-679314 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.294000463s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-679314 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-679314 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c6056713-24ad-4524-94eb-492322b4b15f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c6056713-24ad-4524-94eb-492322b4b15f] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.022119459s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-679314 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.017964148s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
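The timeout indicates that nothing answered DNS queries on 192.168.49.2 within nslookup's default retry window (about 15s here). A faster probe that bounds the wait, plus a check that the ingress-dns pod is up (a sketch; dig's +time/+tries flags cap the timeout and retries, and the grep pattern is an assumption about the addon's pod name):

	# Single bounded DNS query against the node IP, then look for the addon pod
	dig +time=2 +tries=1 hello-john.test @192.168.49.2
	kubectl --context ingress-addon-legacy-679314 get pods -A | grep -i ingress-dns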
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons disable ingress-dns --alsologtostderr -v=1: (6.214485098s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons disable ingress --alsologtostderr -v=1
E0817 21:24:24.657659    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons disable ingress --alsologtostderr -v=1: (7.527313769s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-679314
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-679314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25",
	        "Created": "2023-08-17T21:22:19.819294354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 41343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:22:20.125663625Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25/hosts",
	        "LogPath": "/var/lib/docker/containers/3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25/3d8e81950c365d8bb96f34103a1c21dac32ad86b75206af625296e86c1152a25-json.log",
	        "Name": "/ingress-addon-legacy-679314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-679314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-679314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/22e53939b286f711e91b3ce7a7f6eb2801b2d5cfb05a10eb5e42bc53f9832834-init/diff:/var/lib/docker/overlay2/6e6597fd944d5f98ecbe7d9c5301a949ba6526f8982591cdfcbe3d11f113be4a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22e53939b286f711e91b3ce7a7f6eb2801b2d5cfb05a10eb5e42bc53f9832834/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22e53939b286f711e91b3ce7a7f6eb2801b2d5cfb05a10eb5e42bc53f9832834/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22e53939b286f711e91b3ce7a7f6eb2801b2d5cfb05a10eb5e42bc53f9832834/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-679314",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-679314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-679314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-679314",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-679314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bbc131e9aa4a207bf4c0dc296a5574e8c4b129839191da7b8a0045a4896db2f6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bbc131e9aa4a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-679314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3d8e81950c36",
	                        "ingress-addon-legacy-679314"
	                    ],
	                    "NetworkID": "a77a9af18c34295c84fa16c6ead1d613e53492df8f452e2d9bab8cad40c0256d",
	                    "EndpointID": "40a7f676c70cbd18fab87249b959e4b7039afd453dda8a49585b531edaf94070",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
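All host ports in the inspect output above are bound to 127.0.0.1 on ephemeral ports (32788-32792). A single mapping can be extracted with the same Go template minikube's cli_runner logs elsewhere in this report, shown here with plain shell quoting (a sketch):

	# Print only the host port mapped to the container's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-679314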
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-679314 -n ingress-addon-legacy-679314
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-679314 logs -n 25: (1.355026807s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-545557                                                   | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-545557                                                   | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-545557 ssh findmnt                                          | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-545557 ssh findmnt                                          | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-545557 ssh findmnt                                          | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-545557 ssh findmnt                                          | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-545557                                                   | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-545557 ssh pgrep                                            | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-545557 image build -t                                       | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | localhost/my-image:functional-545557                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-545557                                                      | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-545557 image ls                                             | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	| delete         | -p functional-545557                                                   | functional-545557           | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:22 UTC |
	| start          | -p ingress-addon-legacy-679314                                         | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:22 UTC | 17 Aug 23 21:23 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-679314                                            | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-679314                                            | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-679314                                            | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-679314 ip                                         | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:23 UTC | 17 Aug 23 21:23 UTC |
	| addons         | ingress-addon-legacy-679314                                            | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:24 UTC | 17 Aug 23 21:24 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-679314                                            | ingress-addon-legacy-679314 | jenkins | v1.31.2 | 17 Aug 23 21:24 UTC | 17 Aug 23 21:24 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:22:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:22:02.067813   40897 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:22:02.067965   40897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:02.067975   40897 out.go:309] Setting ErrFile to fd 2...
	I0817 21:22:02.067981   40897 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:22:02.068219   40897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:22:02.068612   40897 out.go:303] Setting JSON to false
	I0817 21:22:02.069647   40897 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3861,"bootTime":1692303461,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:22:02.069717   40897 start.go:138] virtualization:  
	I0817 21:22:02.072496   40897 out.go:177] * [ingress-addon-legacy-679314] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:22:02.075459   40897 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:22:02.077289   40897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:22:02.075745   40897 notify.go:220] Checking for updates...
	I0817 21:22:02.081259   40897 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:22:02.083249   40897 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:22:02.085168   40897 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:22:02.087025   40897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:22:02.088920   40897 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:22:02.112979   40897 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:22:02.113079   40897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:22:02.205536   40897 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-17 21:22:02.194070167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:22:02.205647   40897 docker.go:294] overlay module found
	I0817 21:22:02.207833   40897 out.go:177] * Using the docker driver based on user configuration
	I0817 21:22:02.210150   40897 start.go:298] selected driver: docker
	I0817 21:22:02.210165   40897 start.go:902] validating driver "docker" against <nil>
	I0817 21:22:02.210178   40897 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:22:02.210912   40897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:22:02.281813   40897 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-17 21:22:02.272415765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:22:02.281986   40897 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:22:02.282204   40897 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:22:02.284156   40897 out.go:177] * Using Docker driver with root privileges
	I0817 21:22:02.286238   40897 cni.go:84] Creating CNI manager for ""
	I0817 21:22:02.286257   40897 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:22:02.286267   40897 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
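
The two cni.go lines above encode a small decision table: a KIC driver (docker) paired with the containerd runtime means a CNI is required, and kindnet is the recommendation. A minimal Go sketch of that rule, with the function name and signature invented for illustration:

    package main

    import "fmt"

    // chooseCNI is a hypothetical reduction of the cni.go logic above:
    // when a container-based driver (docker/podman) is paired with a
    // non-docker runtime such as containerd, a CNI is required and
    // kindnet is the default recommendation.
    func chooseCNI(driver, runtime string) string {
    	if (driver == "docker" || driver == "podman") && runtime != "docker" {
    		return "kindnet"
    	}
    	return "" // the runtime's built-in networking is enough
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "containerd")) // kindnet, as in the log
    }
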
	I0817 21:22:02.286289   40897 start_flags.go:319] config:
	{Name:ingress-addon-legacy-679314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-679314 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:22:02.289521   40897 out.go:177] * Starting control plane node ingress-addon-legacy-679314 in cluster ingress-addon-legacy-679314
	I0817 21:22:02.291433   40897 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:22:02.293150   40897 out.go:177] * Pulling base image ...
	I0817 21:22:02.294863   40897 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0817 21:22:02.294925   40897 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:22:02.316605   40897 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:22:02.316630   40897 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:22:02.372882   40897 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0817 21:22:02.372915   40897 cache.go:57] Caching tarball of preloaded images
	I0817 21:22:02.373071   40897 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0817 21:22:02.375085   40897 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0817 21:22:02.376769   40897 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:22:02.492190   40897 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0817 21:22:12.013005   40897 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:22:12.013120   40897 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:22:13.117422   40897 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
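
The preload download carries its expected digest in the checksum=md5:... query parameter, and the saving/verifying lines above show that digest being checked after the transfer. A minimal Go equivalent that streams the file to disk while computing the MD5, using the URL and digest from the download line (minikube itself delegates this to a download library rather than hand-rolling it):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // download fetches url to dest and verifies its MD5 against wantMD5.
    func download(url, dest, wantMD5 string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := md5.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    	}
    	return nil
    }

    func main() {
    	// URL and checksum taken from the download line above.
    	err := download(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4",
    		"/tmp/preload.tar.lz4",
    		"9e505be2989b8c051b1372c317471064",
    	)
    	fmt.Println(err)
    }
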
	I0817 21:22:13.117789   40897 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/config.json ...
	I0817 21:22:13.117823   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/config.json: {Name:mkb0ee14f614402f8f4c11efb81d791b13d8beec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:13.118005   40897 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:22:13.118049   40897 start.go:365] acquiring machines lock for ingress-addon-legacy-679314: {Name:mk4b017e7587622c549ac8ea1d52b0356f0c628b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:22:13.118121   40897 start.go:369] acquired machines lock for "ingress-addon-legacy-679314" in 56.434µs
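
The machines lock acquired here is a named lock with a 500ms retry delay and a 10m timeout (the struct dumped on the acquiring line). An illustrative stand-in in Go, assuming a plain O_EXCL lock file; minikube's actual lock implementation differs:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock retries creating an exclusive lock file every delay
    // until timeout, mirroring the Delay:500ms Timeout:10m0s settings
    // in the log. It returns a release function on success.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }
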
	I0817 21:22:13.118147   40897 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-679314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-679314 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:22:13.118214   40897 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:22:13.120834   40897 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0817 21:22:13.121030   40897 start.go:159] libmachine.API.Create for "ingress-addon-legacy-679314" (driver="docker")
	I0817 21:22:13.121088   40897 client.go:168] LocalClient.Create starting
	I0817 21:22:13.121182   40897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem
	I0817 21:22:13.121220   40897 main.go:141] libmachine: Decoding PEM data...
	I0817 21:22:13.121244   40897 main.go:141] libmachine: Parsing certificate...
	I0817 21:22:13.121300   40897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem
	I0817 21:22:13.121322   40897 main.go:141] libmachine: Decoding PEM data...
	I0817 21:22:13.121336   40897 main.go:141] libmachine: Parsing certificate...
	I0817 21:22:13.121693   40897 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-679314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:22:13.138655   40897 cli_runner.go:211] docker network inspect ingress-addon-legacy-679314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:22:13.138741   40897 network_create.go:281] running [docker network inspect ingress-addon-legacy-679314] to gather additional debugging logs...
	I0817 21:22:13.138761   40897 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-679314
	W0817 21:22:13.155228   40897 cli_runner.go:211] docker network inspect ingress-addon-legacy-679314 returned with exit code 1
	I0817 21:22:13.155257   40897 network_create.go:284] error running [docker network inspect ingress-addon-legacy-679314]: docker network inspect ingress-addon-legacy-679314: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-679314 not found
	I0817 21:22:13.155271   40897 network_create.go:286] output of [docker network inspect ingress-addon-legacy-679314]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-679314 not found
	
	** /stderr **
	I0817 21:22:13.155333   40897 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:22:13.173546   40897 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000ffa900}
	I0817 21:22:13.173581   40897 network_create.go:123] attempt to create docker network ingress-addon-legacy-679314 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 21:22:13.173641   40897 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-679314 ingress-addon-legacy-679314
	I0817 21:22:13.241205   40897 network_create.go:107] docker network ingress-addon-legacy-679314 192.168.49.0/24 created
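
network.go probes for a private /24 that no existing docker network claims, settling on 192.168.49.0/24 here and then creating the bridge with gateway .1 and MTU 1500. A simplified sketch of the probing; minikube's real candidate order and overlap checks differ, and the exact-string matching below is only for illustration:

    package main

    import "fmt"

    // freeSubnet walks candidate 192.168.x.0/24 blocks and returns the
    // first one not already claimed by an existing docker network.
    func freeSubnet(taken map[string]bool) (string, bool) {
    	for third := 49; third <= 254; third++ {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	// In practice this set would be parsed from `docker network inspect`.
    	taken := map[string]bool{}
    	cidr, ok := freeSubnet(taken)
    	fmt.Println(cidr, ok) // 192.168.49.0/24 true, matching the log
    }
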
	I0817 21:22:13.241236   40897 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-679314" container
	I0817 21:22:13.241313   40897 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:22:13.260372   40897 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-679314 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-679314 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:22:13.279978   40897 oci.go:103] Successfully created a docker volume ingress-addon-legacy-679314
	I0817 21:22:13.280063   40897 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-679314-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-679314 --entrypoint /usr/bin/test -v ingress-addon-legacy-679314:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:22:14.762989   40897 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-679314-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-679314 --entrypoint /usr/bin/test -v ingress-addon-legacy-679314:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.482866903s)
	I0817 21:22:14.763017   40897 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-679314
	I0817 21:22:14.763036   40897 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0817 21:22:14.763053   40897 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:22:14.763137   40897 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-679314:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:22:19.723897   40897 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-679314:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.960703724s)
	I0817 21:22:19.723929   40897 kic.go:199] duration metric: took 4.960872 seconds to extract preloaded images to volume
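
The sidecar pattern above populates the machine's named volume before the node container exists: a throwaway container mounts the preload tarball read-only and the volume at /extractDir, and its entrypoint is tar, so the images land in the volume in one shot. A small Go wrapper that shells out the same way (paths and image name are placeholders):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload runs a throwaway container whose entrypoint is tar,
    // with the tarball mounted read-only and the machine volume mounted
    // at /extractDir, mirroring the docker run commands in the log.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(extractPreload("/tmp/preload.tar.lz4", "my-machine",
    		"gcr.io/k8s-minikube/kicbase:v0.0.40"))
    }
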
	W0817 21:22:19.724076   40897 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:22:19.724185   40897 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:22:19.803469   40897 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-679314 --name ingress-addon-legacy-679314 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-679314 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-679314 --network ingress-addon-legacy-679314 --ip 192.168.49.2 --volume ingress-addon-legacy-679314:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:22:20.134119   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Running}}
	I0817 21:22:20.155782   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:22:20.181733   40897 cli_runner.go:164] Run: docker exec ingress-addon-legacy-679314 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:22:20.256025   40897 oci.go:144] the created container "ingress-addon-legacy-679314" has a running status.
	I0817 21:22:20.256051   40897 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa...
	I0817 21:22:20.572404   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0817 21:22:20.572505   40897 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:22:20.603674   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:22:20.638089   40897 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:22:20.638107   40897 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-679314 chown docker:docker /home/docker/.ssh/authorized_keys]
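
The kic SSH key step generates a keypair under .minikube/machines/... and installs the public half as the container's authorized_keys (the 381-byte copy above), then fixes ownership. A minimal sketch using Go's crypto/rsa and golang.org/x/crypto/ssh; the output path is a placeholder:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate the private key and write it as PEM.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("/tmp/id_rsa", privPEM, 0o600); err != nil {
    		panic(err)
    	}
    	// Emit the public half in authorized_keys format.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(ssh.MarshalAuthorizedKey(pub))) // "ssh-rsa AAAA...\n"
    }
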
	I0817 21:22:20.729904   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:22:20.758962   40897 machine.go:88] provisioning docker machine ...
	I0817 21:22:20.758988   40897 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-679314"
	I0817 21:22:20.759053   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:20.794187   40897 main.go:141] libmachine: Using SSH client type: native
	I0817 21:22:20.794648   40897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0817 21:22:20.794662   40897 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-679314 && echo "ingress-addon-legacy-679314" | sudo tee /etc/hostname
	I0817 21:22:20.795273   40897 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46416->127.0.0.1:32792: read: connection reset by peer
	I0817 21:22:23.941598   40897 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-679314
	
	I0817 21:22:23.941676   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:23.964733   40897 main.go:141] libmachine: Using SSH client type: native
	I0817 21:22:23.965168   40897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0817 21:22:23.965192   40897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-679314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-679314/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-679314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:22:24.104273   40897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:22:24.104309   40897 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:22:24.104337   40897 ubuntu.go:177] setting up certificates
	I0817 21:22:24.104345   40897 provision.go:83] configureAuth start
	I0817 21:22:24.104420   40897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-679314
	I0817 21:22:24.123766   40897 provision.go:138] copyHostCerts
	I0817 21:22:24.123812   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:22:24.123844   40897 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:22:24.123855   40897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:22:24.123936   40897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:22:24.124018   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:22:24.124040   40897 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:22:24.124050   40897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:22:24.124075   40897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:22:24.124118   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:22:24.124139   40897 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:22:24.124146   40897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:22:24.124170   40897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:22:24.124276   40897 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-679314 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-679314]
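
The server cert generated here is signed by the shared minikubeCA and lists both IPs and host names from the san=[...] set. A minimal crypto/x509 sketch of the same shape, with arbitrary key sizes and lifetimes and error handling trimmed:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert: IPs go in IPAddresses, host names in DNSNames,
    	// matching the san=[...] list in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "ingress-addon-legacy-679314"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-679314"},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
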
	I0817 21:22:24.623178   40897 provision.go:172] copyRemoteCerts
	I0817 21:22:24.623246   40897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:22:24.623290   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:24.645199   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:22:24.741080   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:22:24.741140   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:22:24.770114   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:22:24.770175   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0817 21:22:24.798144   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:22:24.798205   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:22:24.825547   40897 provision.go:86] duration metric: configureAuth took 721.183686ms
	I0817 21:22:24.825572   40897 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:22:24.825762   40897 config.go:182] Loaded profile config "ingress-addon-legacy-679314": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0817 21:22:24.825775   40897 machine.go:91] provisioned docker machine in 4.066797551s
	I0817 21:22:24.825780   40897 client.go:171] LocalClient.Create took 11.704682828s
	I0817 21:22:24.825798   40897 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-679314" took 11.704766626s
	I0817 21:22:24.825812   40897 start.go:300] post-start starting for "ingress-addon-legacy-679314" (driver="docker")
	I0817 21:22:24.825820   40897 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:22:24.825875   40897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:22:24.825920   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:24.843348   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:22:24.937128   40897 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:22:24.941514   40897 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:22:24.941553   40897 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:22:24.941568   40897 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:22:24.941579   40897 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:22:24.941589   40897 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:22:24.941647   40897 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:22:24.941738   40897 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:22:24.941749   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> /etc/ssl/certs/77452.pem
	I0817 21:22:24.941854   40897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:22:24.952044   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:22:24.981998   40897 start.go:303] post-start completed in 156.172382ms
	I0817 21:22:24.982366   40897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-679314
	I0817 21:22:24.999576   40897 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/config.json ...
	I0817 21:22:24.999845   40897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:22:24.999897   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:25.022251   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:22:25.112895   40897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:22:25.118878   40897 start.go:128] duration metric: createHost completed in 12.000649867s
	I0817 21:22:25.118899   40897 start.go:83] releasing machines lock for "ingress-addon-legacy-679314", held for 12.000764163s
	I0817 21:22:25.118969   40897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-679314
	I0817 21:22:25.136846   40897 ssh_runner.go:195] Run: cat /version.json
	I0817 21:22:25.136853   40897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:22:25.136905   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:25.136940   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:22:25.158343   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:22:25.172383   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:22:25.247082   40897 ssh_runner.go:195] Run: systemctl --version
	I0817 21:22:25.385889   40897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:22:25.391201   40897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:22:25.420060   40897 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:22:25.420192   40897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:22:25.452791   40897 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0817 21:22:25.452833   40897 start.go:466] detecting cgroup driver to use...
	I0817 21:22:25.452865   40897 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:22:25.452920   40897 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:22:25.467479   40897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:22:25.480396   40897 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:22:25.480488   40897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:22:25.495950   40897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:22:25.511612   40897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:22:25.597524   40897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:22:25.705863   40897 docker.go:212] disabling docker service ...
	I0817 21:22:25.705925   40897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:22:25.728038   40897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:22:25.741991   40897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:22:25.844041   40897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:22:25.940395   40897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:22:25.954170   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:22:25.977142   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0817 21:22:25.990512   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:22:26.003829   40897 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:22:26.003906   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:22:26.017754   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:22:26.030291   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:22:26.042117   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:22:26.059225   40897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:22:26.070287   40897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0817 21:22:26.081867   40897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:22:26.092199   40897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:22:26.102366   40897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:22:26.188651   40897 ssh_runner.go:195] Run: sudo systemctl restart containerd
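
The sed chain above flips containerd to the cgroupfs driver by forcing SystemdCgroup = false (along with the sandbox image, runtime shim, and conf_dir edits), then daemon-reloads and restarts the service. The same SystemdCgroup rewrite expressed in Go, assuming the default config path (must run as root, and containerd still needs a restart afterwards):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("rewrote", path, "- restart containerd to apply")
    }
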
	I0817 21:22:26.277780   40897 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:22:26.277924   40897 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:22:26.283148   40897 start.go:534] Will wait 60s for crictl version
	I0817 21:22:26.283261   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:26.287923   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:22:26.337985   40897 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0817 21:22:26.338102   40897 ssh_runner.go:195] Run: containerd --version
	I0817 21:22:26.365230   40897 ssh_runner.go:195] Run: containerd --version
	I0817 21:22:26.397301   40897 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.21 ...
	I0817 21:22:26.399285   40897 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-679314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:22:26.416782   40897 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:22:26.421149   40897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:22:26.434085   40897 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0817 21:22:26.434153   40897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:22:26.477609   40897 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:22:26.477674   40897 ssh_runner.go:195] Run: which lz4
	I0817 21:22:26.482192   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0817 21:22:26.482279   40897 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 21:22:26.486511   40897 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:22:26.486542   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0817 21:22:28.672592   40897 containerd.go:547] Took 2.190346 seconds to copy over tarball
	I0817 21:22:28.672713   40897 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:22:31.357834   40897 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.68506782s)
	I0817 21:22:31.357857   40897 containerd.go:554] Took 2.685204 seconds to extract the tarball
	I0817 21:22:31.357867   40897 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 21:22:31.444774   40897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:22:31.532950   40897 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0817 21:22:31.619731   40897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:22:31.680554   40897 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:22:31.680575   40897 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 21:22:31.680631   40897 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:22:31.680808   40897 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:22:31.680889   40897 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:22:31.680963   40897 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:22:31.681036   40897 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:22:31.681113   40897 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0817 21:22:31.681179   40897 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:22:31.681242   40897 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0817 21:22:31.682130   40897 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:22:31.682575   40897 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:22:31.682831   40897 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:22:31.682893   40897 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:22:31.682940   40897 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0817 21:22:31.682985   40897 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:22:31.683145   40897 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0817 21:22:31.683467   40897 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0817 21:22:32.083579   40897 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.083786   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W0817 21:22:32.102211   40897 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.102378   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	I0817 21:22:32.113720   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0817 21:22:32.131061   40897 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.131262   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W0817 21:22:32.154240   40897 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.154427   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0817 21:22:32.157546   40897 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.157773   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0817 21:22:32.172578   40897 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.172780   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0817 21:22:32.272186   40897 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0817 21:22:32.272382   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
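
Each "arch mismatch: want arm64 got amd64" warning above comes from comparing an image's recorded architecture with the host's before deciding to re-fetch it. A minimal sketch that shells out to docker image inspect for the comparison; minikube performs this check in-process rather than via the CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"runtime"
    	"strings"
    )

    // archMismatch reports whether an image in the local daemon was
    // built for a different architecture than the host (e.g. amd64
    // on an arm64 runner, as in the log).
    func archMismatch(image string) (bool, error) {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Architecture}}", image).Output()
    	if err != nil {
    		return false, err
    	}
    	got := strings.TrimSpace(string(out))
    	return got != runtime.GOARCH, nil
    }

    func main() {
    	bad, err := archMismatch("registry.k8s.io/kube-proxy:v1.18.20")
    	fmt.Println(bad, err) // true on arm64 if only the amd64 image is present
    }
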
	I0817 21:22:32.385401   40897 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0817 21:22:32.385482   40897 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:22:32.385563   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.468207   40897 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0817 21:22:32.468253   40897 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:22:32.468300   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.679086   40897 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0817 21:22:32.679139   40897 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0817 21:22:32.679185   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.900530   40897 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0817 21:22:32.900616   40897 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:22:32.900698   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.920069   40897 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0817 21:22:32.920166   40897 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:22:32.920238   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.920319   40897 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0817 21:22:32.920449   40897 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:22:32.920496   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.920576   40897 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0817 21:22:32.920687   40897 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:22:32.920739   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.920399   40897 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0817 21:22:32.920796   40897 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0817 21:22:32.920836   40897 ssh_runner.go:195] Run: which crictl
	I0817 21:22:32.925248   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0817 21:22:32.925322   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:22:32.925371   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:22:32.925439   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:22:32.925507   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0817 21:22:32.945885   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0817 21:22:32.945969   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:22:32.947501   40897 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:22:33.164299   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0817 21:22:33.164366   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0817 21:22:33.164403   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0817 21:22:33.164447   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0817 21:22:33.164482   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0817 21:22:33.165752   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 21:22:33.165841   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0817 21:22:33.165887   40897 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0817 21:22:33.165917   40897 cache_images.go:92] LoadImages completed in 1.485329826s
	W0817 21:22:33.165977   40897 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0817 21:22:33.166026   40897 ssh_runner.go:195] Run: sudo crictl info
	I0817 21:22:33.206966   40897 cni.go:84] Creating CNI manager for ""
	I0817 21:22:33.206990   40897 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:22:33.207002   40897 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:22:33.207019   40897 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-679314 NodeName:ingress-addon-legacy-679314 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 21:22:33.207149   40897 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-679314"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:22:33.207232   40897 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-679314 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-679314 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:22:33.207307   40897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0817 21:22:33.218077   40897 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:22:33.218142   40897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:22:33.228628   40897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0817 21:22:33.249139   40897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0817 21:22:33.269491   40897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0817 21:22:33.290321   40897 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:22:33.294605   40897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:22:33.307486   40897 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314 for IP: 192.168.49.2
	I0817 21:22:33.307516   40897 certs.go:190] acquiring lock for shared ca certs: {Name:mk058988a603cd06c6d056488c4bdaf60bd886a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:33.307682   40897 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key
	I0817 21:22:33.307738   40897 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key
	I0817 21:22:33.307795   40897 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key
	I0817 21:22:33.307810   40897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt with IP's: []
	I0817 21:22:33.781517   40897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt ...
	I0817 21:22:33.781548   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: {Name:mke35e24c846a65cf3c3fd750aa58798bf3ea642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:33.781752   40897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key ...
	I0817 21:22:33.781764   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key: {Name:mk34d42e9331f0f17ab5f26a9bc7314e282775d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:33.781858   40897 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key.dd3b5fb2
	I0817 21:22:33.781874   40897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:22:34.059095   40897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt.dd3b5fb2 ...
	I0817 21:22:34.059127   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt.dd3b5fb2: {Name:mk05122118277944e7bf745058180e928627d65a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:34.059333   40897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key.dd3b5fb2 ...
	I0817 21:22:34.059345   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key.dd3b5fb2: {Name:mkc9a5d83356a6d4bcf19dc010bc5c0ace9d4cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:34.059426   40897 certs.go:337] copying /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt
	I0817 21:22:34.059498   40897 certs.go:341] copying /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key
	I0817 21:22:34.059555   40897 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.key
	I0817 21:22:34.059571   40897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.crt with IP's: []
	I0817 21:22:35.549408   40897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.crt ...
	I0817 21:22:35.549439   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.crt: {Name:mkb4232ed90822e2cd335252b21930ac87a7bb70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:35.549620   40897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.key ...
	I0817 21:22:35.549634   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.key: {Name:mk4d5234b789af3fa3234266739b1c5aa2e669bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:22:35.549715   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:22:35.549735   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:22:35.549759   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:22:35.549774   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:22:35.549786   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:22:35.549802   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:22:35.549815   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:22:35.549830   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
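
The apiserver certificate generated above is signed for the node IP plus the service and loopback addresses (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A minimal sketch of confirming those SANs once the cert has been copied onto the node by the transfers below:

    # List the IP SANs baked into the apiserver serving certificate
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
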
	I0817 21:22:35.549880   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem (1338 bytes)
	W0817 21:22:35.549919   40897 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745_empty.pem, impossibly tiny 0 bytes
	I0817 21:22:35.549932   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem (1675 bytes)
	I0817 21:22:35.550014   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:22:35.550047   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:22:35.550074   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem (1675 bytes)
	I0817 21:22:35.550125   40897 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:22:35.550156   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:22:35.550175   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem -> /usr/share/ca-certificates/7745.pem
	I0817 21:22:35.550189   40897 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> /usr/share/ca-certificates/77452.pem
	I0817 21:22:35.550750   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:22:35.579485   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:22:35.607476   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:22:35.635398   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:22:35.663415   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:22:35.691745   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:22:35.719420   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:22:35.747485   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:22:35.775085   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:22:35.803794   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem --> /usr/share/ca-certificates/7745.pem (1338 bytes)
	I0817 21:22:35.832412   40897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /usr/share/ca-certificates/77452.pem (1708 bytes)
	I0817 21:22:35.861167   40897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:22:35.882353   40897 ssh_runner.go:195] Run: openssl version
	I0817 21:22:35.889761   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7745.pem && ln -fs /usr/share/ca-certificates/7745.pem /etc/ssl/certs/7745.pem"
	I0817 21:22:35.901281   40897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7745.pem
	I0817 21:22:35.905975   40897 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:18 /usr/share/ca-certificates/7745.pem
	I0817 21:22:35.906066   40897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7745.pem
	I0817 21:22:35.915091   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7745.pem /etc/ssl/certs/51391683.0"
	I0817 21:22:35.926765   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77452.pem && ln -fs /usr/share/ca-certificates/77452.pem /etc/ssl/certs/77452.pem"
	I0817 21:22:35.938202   40897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77452.pem
	I0817 21:22:35.942996   40897 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:18 /usr/share/ca-certificates/77452.pem
	I0817 21:22:35.943066   40897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77452.pem
	I0817 21:22:35.951692   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77452.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:22:35.962917   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:22:35.974855   40897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:22:35.979529   40897 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:22:35.979596   40897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:22:35.988251   40897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
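
Each "ln -fs ... /etc/ssl/certs/<hash>.0" above follows the OpenSSL c_rehash convention: the system trust directory is indexed by subject hash, and the hash in the link name must be the one "openssl x509 -hash" prints for the certificate it points at. A minimal sketch of checking one link by hand, using the paths from this run:

    # The printed hash should equal the symlink's basename (51391683)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/7745.pem
    ls -l /etc/ssl/certs/51391683.0
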
	I0817 21:22:35.999748   40897 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:22:36.004096   40897 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:22:36.004173   40897 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-679314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-679314 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:22:36.004270   40897 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 21:22:36.004336   40897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:22:36.049194   40897 cri.go:89] found id: ""
	I0817 21:22:36.049313   40897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:22:36.060400   40897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:22:36.071576   40897 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0817 21:22:36.071652   40897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:22:36.083744   40897 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:22:36.083789   40897 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 21:22:36.141210   40897 kubeadm.go:322] W0817 21:22:36.140382    1106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0817 21:22:36.195935   40897 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0817 21:22:36.288209   40897 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:22:42.515124   40897 kubeadm.go:322] W0817 21:22:42.514834    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0817 21:22:42.524087   40897 kubeadm.go:322] W0817 21:22:42.516075    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0817 21:22:57.045560   40897 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0817 21:22:57.045611   40897 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:22:57.045702   40897 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:22:57.045756   40897 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-aws
	I0817 21:22:57.045798   40897 kubeadm.go:322] OS: Linux
	I0817 21:22:57.045853   40897 kubeadm.go:322] CGROUPS_CPU: enabled
	I0817 21:22:57.045940   40897 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0817 21:22:57.045993   40897 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0817 21:22:57.046046   40897 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0817 21:22:57.046127   40897 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0817 21:22:57.046191   40897 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0817 21:22:57.046273   40897 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:22:57.046382   40897 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:22:57.046485   40897 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:22:57.046624   40897 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:22:57.046710   40897 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:22:57.046754   40897 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:22:57.046824   40897 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:22:57.049136   40897 out.go:204]   - Generating certificates and keys ...
	I0817 21:22:57.049225   40897 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:22:57.049293   40897 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:22:57.049367   40897 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:22:57.049423   40897 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:22:57.049496   40897 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:22:57.049553   40897 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:22:57.049609   40897 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:22:57.049745   40897 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-679314 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:22:57.049811   40897 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:22:57.049965   40897 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-679314 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:22:57.050036   40897 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:22:57.050098   40897 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:22:57.050143   40897 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:22:57.050198   40897 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:22:57.050262   40897 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:22:57.050314   40897 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:22:57.050378   40897 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:22:57.050431   40897 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:22:57.050497   40897 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:22:57.052349   40897 out.go:204]   - Booting up control plane ...
	I0817 21:22:57.052458   40897 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:22:57.052545   40897 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:22:57.052634   40897 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:22:57.052723   40897 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:22:57.052893   40897 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:22:57.052984   40897 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002538 seconds
	I0817 21:22:57.053104   40897 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:22:57.053239   40897 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:22:57.053307   40897 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:22:57.053446   40897 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-679314 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 21:22:57.053505   40897 kubeadm.go:322] [bootstrap-token] Using token: s90pia.ga9ntqr1u7lxanag
	I0817 21:22:57.056908   40897 out.go:204]   - Configuring RBAC rules ...
	I0817 21:22:57.057026   40897 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:22:57.057116   40897 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:22:57.057251   40897 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:22:57.057373   40897 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:22:57.057483   40897 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:22:57.057569   40897 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:22:57.057677   40897 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:22:57.057737   40897 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:22:57.057783   40897 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:22:57.057790   40897 kubeadm.go:322] 
	I0817 21:22:57.057851   40897 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:22:57.057858   40897 kubeadm.go:322] 
	I0817 21:22:57.057930   40897 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:22:57.057936   40897 kubeadm.go:322] 
	I0817 21:22:57.057960   40897 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:22:57.058018   40897 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:22:57.058068   40897 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:22:57.058077   40897 kubeadm.go:322] 
	I0817 21:22:57.058126   40897 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:22:57.058199   40897 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:22:57.058266   40897 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:22:57.058272   40897 kubeadm.go:322] 
	I0817 21:22:57.058350   40897 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:22:57.058424   40897 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:22:57.058431   40897 kubeadm.go:322] 
	I0817 21:22:57.058509   40897 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token s90pia.ga9ntqr1u7lxanag \
	I0817 21:22:57.058610   40897 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2eedc1a02cbc836dd125235c267520d762e5fc79fb87b3b821c98b561adbc76b \
	I0817 21:22:57.058731   40897 kubeadm.go:322]     --control-plane 
	I0817 21:22:57.058737   40897 kubeadm.go:322] 
	I0817 21:22:57.058820   40897 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:22:57.058829   40897 kubeadm.go:322] 
	I0817 21:22:57.058905   40897 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token s90pia.ga9ntqr1u7lxanag \
	I0817 21:22:57.059020   40897 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2eedc1a02cbc836dd125235c267520d762e5fc79fb87b3b821c98b561adbc76b 
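
The bootstrap token and discovery hash in the join commands above are specific to this run, and bootstrap tokens expire (24h by default), so the lines are not reusable as-is. A minimal sketch of minting a fresh join command on the control-plane node with stock kubeadm:

    # Creates a new bootstrap token and prints a complete 'kubeadm join ...' line
    sudo kubeadm token create --print-join-command
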
	I0817 21:22:57.059035   40897 cni.go:84] Creating CNI manager for ""
	I0817 21:22:57.059047   40897 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:22:57.060917   40897 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:22:57.062738   40897 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:22:57.067992   40897 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0817 21:22:57.069743   40897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:22:57.104794   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
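
kindnet is chosen above because the docker driver paired with the containerd runtime needs an explicit CNI, and the stat of /opt/cni/bin/portmap confirms the plugin binaries kindnet chains to are present. A minimal sketch of verifying the result after the apply, assuming the DaemonSet keeps its usual name of kindnet in kube-system:

    # CNI plugin binaries baked into the kicbase image
    minikube -p ingress-addon-legacy-679314 ssh -- ls /opt/cni/bin
    # On this single node the kindnet DaemonSet should report 1 desired / 1 ready
    kubectl -n kube-system get daemonset kindnet
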
	I0817 21:22:57.544451   40897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:22:57.544579   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:57.544666   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=ingress-addon-legacy-679314 minikube.k8s.io/updated_at=2023_08_17T21_22_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:57.724990   40897 ops.go:34] apiserver oom_adj: -16
	I0817 21:22:57.725076   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:57.829004   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:58.420977   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:58.921172   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:59.421540   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:22:59.921631   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:00.420867   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:00.920860   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:01.421107   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:01.921466   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:02.420893   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:02.921787   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:03.421470   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:03.921030   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:04.421472   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:04.921808   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:05.421718   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:05.921521   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:06.420862   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:06.921504   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:07.420837   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:07.921525   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:08.420988   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:08.921599   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:09.420902   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:09.921661   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:10.420895   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:10.920868   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:11.420874   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:11.921372   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:12.421160   40897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:23:12.585460   40897 kubeadm.go:1081] duration metric: took 15.040914586s to wait for elevateKubeSystemPrivileges.
	I0817 21:23:12.585496   40897 kubeadm.go:406] StartCluster complete in 36.581352223s
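
The burst of identical "kubectl get sa default" runs between 21:22:57 and 21:23:12 is a readiness poll: kubeadm init returns before the ServiceAccount controller has populated the default ServiceAccount, so minikube polls for it on a roughly 500ms cadence until it appears (the 15.04s duration logged here). An equivalent shell sketch of the same wait, assuming a working kubeconfig:

    # Block until the default ServiceAccount appears
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
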
	I0817 21:23:12.585511   40897 settings.go:142] acquiring lock: {Name:mk7a5a07825601654f691495799b769adb4489ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:23:12.585570   40897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:23:12.586249   40897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/kubeconfig: {Name:mkf341824bbe915f226637e75b19e0928287e2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:23:12.586721   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:23:12.587006   40897 config.go:182] Loaded profile config "ingress-addon-legacy-679314": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0817 21:23:12.586969   40897 kapi.go:59] client config for ingress-addon-legacy-679314: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key", CAFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ec6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:23:12.587119   40897 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:23:12.587197   40897 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-679314"
	I0817 21:23:12.587211   40897 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-679314"
	I0817 21:23:12.587261   40897 host.go:66] Checking if "ingress-addon-legacy-679314" exists ...
	I0817 21:23:12.587695   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:23:12.588337   40897 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:23:12.588672   40897 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-679314"
	I0817 21:23:12.588695   40897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-679314"
	I0817 21:23:12.588969   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:23:12.637784   40897 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:23:12.641844   40897 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:23:12.641865   40897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:23:12.641936   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:23:12.642762   40897 kapi.go:59] client config for ingress-addon-legacy-679314: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key", CAFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ec6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:23:12.657464   40897 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-679314" context rescaled to 1 replicas
	I0817 21:23:12.657502   40897 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0817 21:23:12.659626   40897 out.go:177] * Verifying Kubernetes components...
	I0817 21:23:12.661660   40897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:23:12.659496   40897 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-679314"
	I0817 21:23:12.661773   40897 host.go:66] Checking if "ingress-addon-legacy-679314" exists ...
	I0817 21:23:12.662235   40897 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-679314 --format={{.State.Status}}
	I0817 21:23:12.713390   40897 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:23:12.713415   40897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:23:12.713485   40897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-679314
	I0817 21:23:12.714611   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:23:12.743007   40897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/ingress-addon-legacy-679314/id_rsa Username:docker}
	I0817 21:23:12.921590   40897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 21:23:12.922025   40897 kapi.go:59] client config for ingress-addon-legacy-679314: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.key", CAFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ec6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:23:12.922339   40897 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-679314" to be "Ready" ...
	I0817 21:23:12.931960   40897 node_ready.go:49] node "ingress-addon-legacy-679314" has status "Ready":"True"
	I0817 21:23:12.931991   40897 node_ready.go:38] duration metric: took 9.627288ms waiting for node "ingress-addon-legacy-679314" to be "Ready" ...
	I0817 21:23:12.932003   40897 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:23:12.943460   40897 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:12.988364   40897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:23:12.994191   40897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:23:13.563077   40897 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0817 21:23:13.780589   40897 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 21:23:13.782819   40897 addons.go:502] enable addons completed in 1.195689955s: enabled=[storage-provisioner default-storageclass]
	I0817 21:23:14.974850   40897 pod_ready.go:102] pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:16.975752   40897 pod_ready.go:102] pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:19.474660   40897 pod_ready.go:102] pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:20.471519   40897 pod_ready.go:97] error getting pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-jbfpk" not found
	I0817 21:23:20.471550   40897 pod_ready.go:81] duration metric: took 7.528062328s waiting for pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace to be "Ready" ...
	E0817 21:23:20.471561   40897 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-jbfpk" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-jbfpk" not found
	I0817 21:23:20.471569   40897 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:22.488158   40897 pod_ready.go:102] pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:24.488960   40897 pod_ready.go:102] pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:26.986818   40897 pod_ready.go:102] pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace has status "Ready":"False"
	I0817 21:23:28.486241   40897 pod_ready.go:92] pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.486266   40897 pod_ready.go:81] duration metric: took 8.014689368s waiting for pod "coredns-66bff467f8-vtc5t" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.486277   40897 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.490658   40897 pod_ready.go:92] pod "etcd-ingress-addon-legacy-679314" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.490721   40897 pod_ready.go:81] duration metric: took 4.433084ms waiting for pod "etcd-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.490741   40897 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.495369   40897 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-679314" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.495393   40897 pod_ready.go:81] duration metric: took 4.644084ms waiting for pod "kube-apiserver-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.495404   40897 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.500595   40897 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-679314" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.500620   40897 pod_ready.go:81] duration metric: took 5.208513ms waiting for pod "kube-controller-manager-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.500636   40897 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fcsnv" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.504914   40897 pod_ready.go:92] pod "kube-proxy-fcsnv" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.504940   40897 pod_ready.go:81] duration metric: took 4.295421ms waiting for pod "kube-proxy-fcsnv" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.504951   40897 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.682339   40897 request.go:628] Waited for 177.291559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-679314
	I0817 21:23:28.881407   40897 request.go:628] Waited for 196.242986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-679314
	I0817 21:23:28.884070   40897 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-679314" in "kube-system" namespace has status "Ready":"True"
	I0817 21:23:28.884096   40897 pod_ready.go:81] duration metric: took 379.136898ms waiting for pod "kube-scheduler-ingress-addon-legacy-679314" in "kube-system" namespace to be "Ready" ...
	I0817 21:23:28.884106   40897 pod_ready.go:38] duration metric: took 15.952090864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:23:28.884121   40897 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:23:28.884208   40897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:23:28.897424   40897 api_server.go:72] duration metric: took 16.2398846s to wait for apiserver process to appear ...
	I0817 21:23:28.897449   40897 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:23:28.897470   40897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 21:23:28.906557   40897 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
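
The healthz probe above needs no client certificate: /healthz, like /version, /livez, and /readyz, is typically readable by unauthenticated callers through the default system:public-info-viewer binding, so a bare HTTPS GET suffices. A minimal sketch against the endpoint from this run, skipping verification of the minikube CA:

    # A healthy apiserver answers 200 with the literal body 'ok'
    curl -k https://192.168.49.2:8443/healthz
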
	I0817 21:23:28.907753   40897 api_server.go:141] control plane version: v1.18.20
	I0817 21:23:28.907780   40897 api_server.go:131] duration metric: took 10.325459ms to wait for apiserver health ...
	I0817 21:23:28.907790   40897 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:23:29.082138   40897 request.go:628] Waited for 174.287875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:23:29.087886   40897 system_pods.go:59] 8 kube-system pods found
	I0817 21:23:29.087923   40897 system_pods.go:61] "coredns-66bff467f8-vtc5t" [9e46424f-e070-4847-a731-345b76e5b868] Running
	I0817 21:23:29.087930   40897 system_pods.go:61] "etcd-ingress-addon-legacy-679314" [a1fb6aed-cd2f-4b47-a468-39e0d210043c] Running
	I0817 21:23:29.087935   40897 system_pods.go:61] "kindnet-5rvgv" [86fb5253-41ca-4cb7-ac33-027a4db857a7] Running
	I0817 21:23:29.087968   40897 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-679314" [773cc39d-bcc4-4c3c-aac6-e16b1e1821fa] Running
	I0817 21:23:29.087983   40897 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-679314" [0256a09e-c489-4b53-9dce-00dc0515bebc] Running
	I0817 21:23:29.087988   40897 system_pods.go:61] "kube-proxy-fcsnv" [bf512f9f-7a05-4ae3-b4ee-6a179ed48d60] Running
	I0817 21:23:29.087993   40897 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-679314" [7be29b5a-4eb1-49ec-9044-d69e2e037777] Running
	I0817 21:23:29.088006   40897 system_pods.go:61] "storage-provisioner" [d3a8ea8b-315c-402b-acc4-a16aa70ed93a] Running
	I0817 21:23:29.088014   40897 system_pods.go:74] duration metric: took 180.220315ms to wait for pod list to return data ...
	I0817 21:23:29.088022   40897 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:23:29.281355   40897 request.go:628] Waited for 193.240738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:23:29.283776   40897 default_sa.go:45] found service account: "default"
	I0817 21:23:29.283800   40897 default_sa.go:55] duration metric: took 195.748889ms for default service account to be created ...
	I0817 21:23:29.283811   40897 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:23:29.482225   40897 request.go:628] Waited for 198.337236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:23:29.487794   40897 system_pods.go:86] 8 kube-system pods found
	I0817 21:23:29.487826   40897 system_pods.go:89] "coredns-66bff467f8-vtc5t" [9e46424f-e070-4847-a731-345b76e5b868] Running
	I0817 21:23:29.487834   40897 system_pods.go:89] "etcd-ingress-addon-legacy-679314" [a1fb6aed-cd2f-4b47-a468-39e0d210043c] Running
	I0817 21:23:29.487839   40897 system_pods.go:89] "kindnet-5rvgv" [86fb5253-41ca-4cb7-ac33-027a4db857a7] Running
	I0817 21:23:29.487844   40897 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-679314" [773cc39d-bcc4-4c3c-aac6-e16b1e1821fa] Running
	I0817 21:23:29.487852   40897 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-679314" [0256a09e-c489-4b53-9dce-00dc0515bebc] Running
	I0817 21:23:29.487857   40897 system_pods.go:89] "kube-proxy-fcsnv" [bf512f9f-7a05-4ae3-b4ee-6a179ed48d60] Running
	I0817 21:23:29.487862   40897 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-679314" [7be29b5a-4eb1-49ec-9044-d69e2e037777] Running
	I0817 21:23:29.487873   40897 system_pods.go:89] "storage-provisioner" [d3a8ea8b-315c-402b-acc4-a16aa70ed93a] Running
	I0817 21:23:29.487879   40897 system_pods.go:126] duration metric: took 204.063338ms to wait for k8s-apps to be running ...
	I0817 21:23:29.487888   40897 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:23:29.487947   40897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:23:29.500984   40897 system_svc.go:56] duration metric: took 13.08516ms WaitForService to wait for kubelet.
	I0817 21:23:29.501011   40897 kubeadm.go:581] duration metric: took 16.843476304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:23:29.501030   40897 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:23:29.681331   40897 request.go:628] Waited for 180.230965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0817 21:23:29.684122   40897 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0817 21:23:29.684158   40897 node_conditions.go:123] node cpu capacity is 2
	I0817 21:23:29.684170   40897 node_conditions.go:105] duration metric: took 183.134877ms to run NodePressure ...
	I0817 21:23:29.684181   40897 start.go:228] waiting for startup goroutines ...
	I0817 21:23:29.684188   40897 start.go:233] waiting for cluster config update ...
	I0817 21:23:29.684198   40897 start.go:242] writing updated cluster config ...
	I0817 21:23:29.684502   40897 ssh_runner.go:195] Run: rm -f paused
	I0817 21:23:29.745812   40897 start.go:600] kubectl: 1.28.0, cluster: 1.18.20 (minor skew: 10)
	I0817 21:23:29.748495   40897 out.go:177] 
	W0817 21:23:29.750893   40897 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0817 21:23:29.753237   40897 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0817 21:23:29.755341   40897 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-679314" cluster and "default" namespace by default
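
The closing warning flags client/server skew: kubectl officially supports only one minor version of skew against the apiserver, and 1.28 against a 1.18 control plane is ten minors apart, so some commands may misbehave. A minimal sketch of the workaround the hint points at, which runs a client binary matching the cluster:

    # minikube fetches and invokes kubectl v1.18.20 for this profile
    minikube -p ingress-addon-legacy-679314 kubectl -- get pods -A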
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6836b7471570d       13753a81eccfd       11 seconds ago       Exited              hello-world-app           2                   02887532ad582       hello-world-app-5f5d8b66bb-68g7s
	243862f0d7a6a       397432849901d       36 seconds ago       Running             nginx                     0                   f2783f8d1857b       nginx
	5d6a6b940a52f       d7f0cba3aa5bf       50 seconds ago       Exited              controller                0                   c08e628a6a430       ingress-nginx-controller-7fcf777cb7-k5v82
	51f1d113a200a       a883f7fc35610       56 seconds ago       Exited              patch                     0                   b11fce540f507       ingress-nginx-admission-patch-bz8kl
	705b7f7899db0       a883f7fc35610       56 seconds ago       Exited              create                    0                   b1a3103734231       ingress-nginx-admission-create-4j2mh
	d22fd262ca180       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   afed9a58d5a9a       coredns-66bff467f8-vtc5t
	01ee03f89ad61       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   92431bd985dc8       storage-provisioner
	456f37fd5cc68       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   05113c455f6b7       kindnet-5rvgv
	813fd4d0eb6d7       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   cbfd3de6d5d5b       kube-proxy-fcsnv
	9eaacf24fbe48       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   7eb7b6140543d       etcd-ingress-addon-legacy-679314
	28ead3f67f833       095f37015706d       About a minute ago   Running             kube-scheduler            0                   2e5047d0278b0       kube-scheduler-ingress-addon-legacy-679314
	980dabb54b6e5       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   fa3a6d467a17f       kube-apiserver-ingress-addon-legacy-679314
	ce14f390abd8c       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   7ac1f6d3adcb7       kube-controller-manager-ingress-addon-legacy-679314
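	The table above is minikube's snapshot of the containerd runtime state; note hello-world-app has already exited twice (ATTEMPT 2), matching the CrashLoopBackOff entries in the kubelet log further down. The same view can usually be reproduced on the node itself (a sketch, assuming crictl is installed and pointed at the CRI socket from this node's annotations):

	  # list all CRI containers, including exited ones
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a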
	
	* 
	* ==> containerd <==
	* Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.403886325Z" level=info msg="shim disconnected" id=5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.403948773Z" level=warning msg="cleaning up after shim disconnected" id=5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b namespace=k8s.io
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.403961540Z" level=info msg="cleaning up dead shim"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.414699087Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:24:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4575 runtime=io.containerd.runc.v2\n"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.417455621Z" level=info msg="StopContainer for \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" returns successfully"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.417605370Z" level=info msg="StopContainer for \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" returns successfully"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.418225315Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\""
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.418304723Z" level=info msg="Container to stop \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.420622281Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\""
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.420888582Z" level=info msg="Container to stop \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.458161310Z" level=info msg="shim disconnected" id=c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.458243852Z" level=warning msg="cleaning up after shim disconnected" id=c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb namespace=k8s.io
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.458255503Z" level=info msg="cleaning up dead shim"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.468914520Z" level=warning msg="cleanup warnings time=\"2023-08-17T21:24:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.526899722Z" level=info msg="TearDown network for sandbox \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" successfully"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.526943684Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" returns successfully"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.539051408Z" level=info msg="TearDown network for sandbox \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" successfully"
	Aug 17 21:24:23 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:23.539099497Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" returns successfully"
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.401652863Z" level=info msg="StopContainer for \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" with timeout 2 (s)"
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.401720932Z" level=info msg="Container to stop \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.401787228Z" level=info msg="StopContainer for \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" returns successfully"
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.402797263Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\""
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.402864617Z" level=info msg="Container to stop \"5d6a6b940a52ff58ff246a804b903f6ab86c21b9f945fe6e5c7ec87e39562f9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.430315196Z" level=info msg="TearDown network for sandbox \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" successfully"
	Aug 17 21:24:24 ingress-addon-legacy-679314 containerd[822]: time="2023-08-17T21:24:24.430369423Z" level=info msg="StopPodSandbox for \"c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb\" returns successfully"
	
	* 
	* ==> coredns [d22fd262ca180b3cb3db6e11e97bb930fd85085958223e0a0ec7b73a38463fbb] <==
	* [INFO] 10.244.0.5:47887 - 26974 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044536s
	[INFO] 10.244.0.5:47887 - 59468 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033985s
	[INFO] 10.244.0.5:40097 - 9254 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001966663s
	[INFO] 10.244.0.5:47887 - 55966 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005150841s
	[INFO] 10.244.0.5:47887 - 4363 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00101463s
	[INFO] 10.244.0.5:40097 - 13998 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100494s
	[INFO] 10.244.0.5:47887 - 64119 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041977s
	[INFO] 10.244.0.5:39528 - 64684 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075124s
	[INFO] 10.244.0.5:45114 - 10506 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037587s
	[INFO] 10.244.0.5:39528 - 1251 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029555s
	[INFO] 10.244.0.5:39528 - 46506 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042683s
	[INFO] 10.244.0.5:45114 - 16325 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026068s
	[INFO] 10.244.0.5:45114 - 6752 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000088097s
	[INFO] 10.244.0.5:39528 - 4113 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030564s
	[INFO] 10.244.0.5:45114 - 11750 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035955s
	[INFO] 10.244.0.5:39528 - 36870 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024164s
	[INFO] 10.244.0.5:45114 - 17755 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050502s
	[INFO] 10.244.0.5:39528 - 43567 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027544s
	[INFO] 10.244.0.5:45114 - 29621 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041312s
	[INFO] 10.244.0.5:39528 - 22487 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001626255s
	[INFO] 10.244.0.5:45114 - 60463 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000860342s
	[INFO] 10.244.0.5:39528 - 35625 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000863125s
	[INFO] 10.244.0.5:39528 - 40173 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068775s
	[INFO] 10.244.0.5:45114 - 13574 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001156691s
	[INFO] 10.244.0.5:45114 - 11065 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037054s
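	The burst of NXDOMAIN answers above is the ordinary search-path walk, not a failure: with ndots:5, a lookup for hello-world-app.default.svc.cluster.local is first tried with every search suffix appended (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and the EC2 host domain us-east-2.compute.internal) before the bare name finally returns NOERROR. An illustrative /etc/resolv.conf for a pod in the ingress-nginx namespace would look like this (the kube-dns ClusterIP 10.96.0.10 is the conventional default and an assumption here):

	  search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10
	  options ndots:5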
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-679314
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-679314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=ingress-addon-legacy-679314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_22_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-679314
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:24:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:24:00 +0000   Thu, 17 Aug 2023 21:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:24:00 +0000   Thu, 17 Aug 2023 21:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:24:00 +0000   Thu, 17 Aug 2023 21:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:24:00 +0000   Thu, 17 Aug 2023 21:23:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-679314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 2780aacaa80e419aad786473b23bf9e4
	  System UUID:                bec55786-91f2-4a0a-aa37-862912a507c2
	  Boot ID:                    da56fcbe-e8d4-44e4-8927-1925d04822e5
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-68g7s                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-vtc5t                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-ingress-addon-legacy-679314                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kindnet-5rvgv                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-ingress-addon-legacy-679314             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-679314    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-fcsnv                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-679314             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  104s (x4 over 105s)  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x4 over 105s)  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x4 over 105s)  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                  kubelet     Node ingress-addon-legacy-679314 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                  kubelet     Node ingress-addon-legacy-679314 status is now: NodeReady
	  Normal  Starting                 77s                  kube-proxy  Starting kube-proxy.
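	This section is the single node as kubectl describe reports it; outside the test harness it can be regenerated against the same profile with:

	  kubectl --context ingress-addon-legacy-679314 describe node ingress-addon-legacy-679314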
	
	* 
	* ==> dmesg <==
	* [  +0.000711] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000e816f04d
	[  +0.001056] FS-Cache: N-key=[8] '9c385c0100000000'
	[  +0.002499] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=000000000650bc75
	[  +0.001039] FS-Cache: O-key=[8] '9c385c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000dd57f7a2
	[  +0.001042] FS-Cache: N-key=[8] '9c385c0100000000'
	[  +2.912460] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000961] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=0000000082ea674a
	[  +0.001069] FS-Cache: O-key=[8] '9b385c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000e816f04d
	[  +0.001048] FS-Cache: N-key=[8] '9b385c0100000000'
	[  +0.378950] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=00000000488aa4bd
	[  +0.001095] FS-Cache: O-key=[8] 'a3385c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000cecce696
	[  +0.001165] FS-Cache: N-key=[8] 'a3385c0100000000'
	[Aug17 21:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [9eaacf24fbe48be644f233e098abe996c4d9cac918762669ee0945ef6ba4abd3] <==
	* raft2023/08/17 21:22:48 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/17 21:22:48 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/17 21:22:48 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/17 21:22:48 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-17 21:22:49.088253 W | auth: simple token is not cryptographically signed
	2023-08-17 21:22:49.242820 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-17 21:22:49.290764 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/17 21:22:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-17 21:22:49.494638 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-17 21:22:49.572205 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-17 21:22:49.746877 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-17 21:22:49.782687 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/08/17 21:22:50 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/17 21:22:50 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/17 21:22:50 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/17 21:22:50 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/17 21:22:50 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-17 21:22:50.055716 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-17 21:22:50.056538 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-17 21:22:50.056688 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-17 21:22:50.056819 I | etcdserver: published {Name:ingress-addon-legacy-679314 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-17 21:22:50.057035 I | embed: ready to serve client requests
	2023-08-17 21:22:50.058639 I | embed: ready to serve client requests
	2023-08-17 21:22:50.059352 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-17 21:22:50.060729 I | embed: serving client requests on 127.0.0.1:2379
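	etcd came up as a single-voter cluster and won its own election at term 2, which is the expected bootstrap sequence. For live debugging, member health can be queried with etcdctl using the certificate paths from the ClientTLS line above (a sketch; reusing the server certificate for client auth is common in kubeadm-style setups but should be verified for this image):

	  sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table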
	
	* 
	* ==> kernel <==
	*  21:24:29 up  1:06,  0 users,  load average: 0.81, 1.13, 0.73
	Linux ingress-addon-legacy-679314 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [456f37fd5cc68f8db63cb01c356ad24b816610fe7abf3bb5d25d1f667e7142c6] <==
	* I0817 21:23:14.488774       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0817 21:23:14.490756       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0817 21:23:14.490888       1 main.go:116] setting mtu 1500 for CNI 
	I0817 21:23:14.490899       1 main.go:146] kindnetd IP family: "ipv4"
	I0817 21:23:14.490910       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0817 21:23:14.885133       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:14.885344       1 main.go:227] handling current node
	I0817 21:23:24.899894       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:24.899926       1 main.go:227] handling current node
	I0817 21:23:34.909956       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:34.909984       1 main.go:227] handling current node
	I0817 21:23:44.921955       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:44.922136       1 main.go:227] handling current node
	I0817 21:23:54.925435       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:54.925463       1 main.go:227] handling current node
	I0817 21:24:04.937510       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:04.937541       1 main.go:227] handling current node
	I0817 21:24:14.941013       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:14.941043       1 main.go:227] handling current node
	I0817 21:24:24.953626       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:24.953653       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [980dabb54b6e55b4385f6eb3b2ed0f755b857f24fea61e9318160570c32e7d6d] <==
	* E0817 21:22:53.891221       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0817 21:22:54.078295       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0817 21:22:54.139271       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:22:54.139532       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:22:54.140322       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0817 21:22:54.141311       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:22:54.836661       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 21:22:54.836691       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:22:54.845043       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0817 21:22:54.849387       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0817 21:22:54.849413       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0817 21:22:55.284582       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:22:55.326071       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 21:22:55.472199       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 21:22:55.473308       1 controller.go:609] quota admission added evaluator for: endpoints
	I0817 21:22:55.476983       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 21:22:56.332665       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0817 21:22:56.885058       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0817 21:22:57.009980       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0817 21:23:00.362096       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:23:11.835248       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0817 21:23:12.049236       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0817 21:23:30.600951       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0817 21:23:50.167772       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0817 21:24:21.319720       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [ce14f390abd8c623191360608cb5c91691b18ca2e0b245ad87c86208fba6b44b] <==
	* I0817 21:23:12.138694       1 shared_informer.go:230] Caches are synced for service account 
	I0817 21:23:12.152542       1 shared_informer.go:230] Caches are synced for expand 
	I0817 21:23:12.184331       1 shared_informer.go:230] Caches are synced for PV protection 
	I0817 21:23:12.190394       1 shared_informer.go:230] Caches are synced for namespace 
	I0817 21:23:12.234487       1 shared_informer.go:230] Caches are synced for attach detach 
	I0817 21:23:12.269500       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0817 21:23:12.278260       1 shared_informer.go:230] Caches are synced for HPA 
	I0817 21:23:12.289499       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0817 21:23:12.315830       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0817 21:23:12.315872       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:23:12.325734       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:23:12.325754       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 21:23:12.331920       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 21:23:12.384522       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 21:23:12.658861       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5cef5a44-13cd-4e0f-bbf0-2cba635d5e5e", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0817 21:23:12.731531       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"319eff39-0f95-4ea4-adb6-9f590c444bf0", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-jbfpk
	I0817 21:23:30.583923       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9e5676da-641e-49d3-8029-06271cf6c85c", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0817 21:23:30.623254       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"60be50ad-b5e3-4bb2-8fc7-2965bd0a6635", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4j2mh
	I0817 21:23:30.624928       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"13f0886f-66be-4c1f-8fe3-30dd3fc3b900", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-k5v82
	I0817 21:23:30.677552       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f1cbecad-ad23-41fc-a6d4-e825878426ba", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-bz8kl
	I0817 21:23:33.540598       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f1cbecad-ad23-41fc-a6d4-e825878426ba", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:23:33.563977       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"60be50ad-b5e3-4bb2-8fc7-2965bd0a6635", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:23:58.900221       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"cffb1dbc-0382-4346-8e88-1931493cb3c8", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0817 21:23:58.912303       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"51542e6a-0a14-4887-ac9b-202c088e0986", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-68g7s
	E0817 21:24:25.986655       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-q289l" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [813fd4d0eb6d7a98fd717cf50ed1f13a65237dd3aaf09e43f69153a57eee461f] <==
	* W0817 21:23:12.813724       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0817 21:23:12.846852       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0817 21:23:12.846907       1 server_others.go:186] Using iptables Proxier.
	I0817 21:23:12.855713       1 server.go:583] Version: v1.18.20
	I0817 21:23:12.906482       1 config.go:315] Starting service config controller
	I0817 21:23:12.906506       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0817 21:23:12.906543       1 config.go:133] Starting endpoints config controller
	I0817 21:23:12.906547       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0817 21:23:13.007131       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0817 21:23:13.007228       1 shared_informer.go:230] Caches are synced for service config 
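	With no proxy mode configured, kube-proxy fell back to the iptables proxier, so every Service is materialized as KUBE-SERVICES/KUBE-SVC-* chains on the node; they can be inspected with (a sketch):

	  sudo iptables-save | grep KUBE-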
	
	* 
	* ==> kube-scheduler [28ead3f67f8338b1a8edfd5618f0bcadcf320a2f33e6ee3fcb2b9282db5e490d] <==
	* W0817 21:22:54.064788       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 21:22:54.064817       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:22:54.064825       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 21:22:54.064832       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 21:22:54.096168       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 21:22:54.096379       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 21:22:54.098684       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0817 21:22:54.098997       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 21:22:54.099169       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 21:22:54.099273       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 21:22:54.110485       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:22:54.111022       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:22:54.111210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:22:54.111725       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:22:54.111849       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:22:54.112063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 21:22:54.112201       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:22:54.112262       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:22:54.112359       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:22:54.112453       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:22:54.112468       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:22:54.113459       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:22:55.074983       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:22:55.240649       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 21:22:56.899429       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Aug 17 21:24:01 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:01.648060    1632 pod_workers.go:191] Error syncing pod e544c27c-e3cd-4fe4-a6db-7a251391cbb5 ("kube-ingress-dns-minikube_kube-system(e544c27c-e3cd-4fe4-a6db-7a251391cbb5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e544c27c-e3cd-4fe4-a6db-7a251391cbb5)"
	Aug 17 21:24:01 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:01.656163    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f9113833763bf652f4ebcae6077bbf91211710f83fb9e11ff5f28bbb0c5350d
	Aug 17 21:24:02 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:02.661538    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f9113833763bf652f4ebcae6077bbf91211710f83fb9e11ff5f28bbb0c5350d
	Aug 17 21:24:02 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:02.662242    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ee75c19e78f6360637c04c128856cad3a9a70aef6eb852e2af4030e252cab91
	Aug 17 21:24:02 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:02.662581    1632 pod_workers.go:191] Error syncing pod 52d055dc-c8e6-4b91-aca9-f82e971ff8c5 ("hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"
	Aug 17 21:24:03 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:03.665105    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ee75c19e78f6360637c04c128856cad3a9a70aef6eb852e2af4030e252cab91
	Aug 17 21:24:03 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:03.665371    1632 pod_workers.go:191] Error syncing pod 52d055dc-c8e6-4b91-aca9-f82e971ff8c5 ("hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"
	Aug 17 21:24:14 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:14.814537    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6f9kk" (UniqueName: "kubernetes.io/secret/e544c27c-e3cd-4fe4-a6db-7a251391cbb5-minikube-ingress-dns-token-6f9kk") pod "e544c27c-e3cd-4fe4-a6db-7a251391cbb5" (UID: "e544c27c-e3cd-4fe4-a6db-7a251391cbb5")
	Aug 17 21:24:14 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:14.828056    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544c27c-e3cd-4fe4-a6db-7a251391cbb5-minikube-ingress-dns-token-6f9kk" (OuterVolumeSpecName: "minikube-ingress-dns-token-6f9kk") pod "e544c27c-e3cd-4fe4-a6db-7a251391cbb5" (UID: "e544c27c-e3cd-4fe4-a6db-7a251391cbb5"). InnerVolumeSpecName "minikube-ingress-dns-token-6f9kk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:14 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:14.914907    1632 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6f9kk" (UniqueName: "kubernetes.io/secret/e544c27c-e3cd-4fe4-a6db-7a251391cbb5-minikube-ingress-dns-token-6f9kk") on node "ingress-addon-legacy-679314" DevicePath ""
	Aug 17 21:24:16 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:16.693812    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 757a2d5f1ddebc0dd0c13c08cb6fcfef5c35aec6e86e645e1391594896351c7f
	Aug 17 21:24:17 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:17.397257    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ee75c19e78f6360637c04c128856cad3a9a70aef6eb852e2af4030e252cab91
	Aug 17 21:24:17 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:17.698479    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ee75c19e78f6360637c04c128856cad3a9a70aef6eb852e2af4030e252cab91
	Aug 17 21:24:17 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:17.698821    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6836b7471570d053eee1600f13ae7c644386394984faba7f3cb3f39a3e38f70e
	Aug 17 21:24:17 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:17.699100    1632 pod_workers.go:191] Error syncing pod 52d055dc-c8e6-4b91-aca9-f82e971ff8c5 ("hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-68g7s_default(52d055dc-c8e6-4b91-aca9-f82e971ff8c5)"
	Aug 17 21:24:21 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:21.301435    1632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-k5v82.177c48ab7961d367", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-k5v82", UID:"84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-679314"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e151bbe167, ext:84471936594, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e151bbe167, ext:84471936594, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-k5v82.177c48ab7961d367" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:24:21 ingress-addon-legacy-679314 kubelet[1632]: E0817 21:24:21.344418    1632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-k5v82.177c48ab7961d367", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-k5v82", UID:"84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-679314"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e151bbe167, ext:84471936594, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e1532cca74, ext:84496113503, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-k5v82.177c48ab7961d367" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:24:23 ingress-addon-legacy-679314 kubelet[1632]: W0817 21:24:23.714232    1632 pod_container_deletor.go:77] Container "c08e628a6a430d564a75e84d80aca06065af323c455949deafe00597647cb6cb" not found in pod's containers
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.461720    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-btqjq" (UniqueName: "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-ingress-nginx-token-btqjq") pod "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5" (UID: "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5")
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.461769    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-webhook-cert") pod "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5" (UID: "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5")
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.467663    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-ingress-nginx-token-btqjq" (OuterVolumeSpecName: "ingress-nginx-token-btqjq") pod "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5" (UID: "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5"). InnerVolumeSpecName "ingress-nginx-token-btqjq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.473618    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5" (UID: "84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.562109    1632 reconciler.go:319] Volume detached for volume "ingress-nginx-token-btqjq" (UniqueName: "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-ingress-nginx-token-btqjq") on node "ingress-addon-legacy-679314" DevicePath ""
	Aug 17 21:24:25 ingress-addon-legacy-679314 kubelet[1632]: I0817 21:24:25.562155    1632 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5-webhook-cert") on node "ingress-addon-legacy-679314" DevicePath ""
	Aug 17 21:24:26 ingress-addon-legacy-679314 kubelet[1632]: W0817 21:24:26.403900    1632 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/84dc0cb9-8e3d-4a2c-bf5a-d1d07afca9d5/volumes" does not exist
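	The kubelet log shows hello-world-app cycling through CrashLoopBackOff (back-off 10s, then 20s) while the ingress-nginx namespace is torn down around it. To see why the container keeps exiting, the previous attempt's output is usually the first stop (a sketch against this profile):

	  kubectl --context ingress-addon-legacy-679314 logs hello-world-app-5f5d8b66bb-68g7s --previous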
	
	* 
	* ==> storage-provisioner [01ee03f89ad61d3ea401191a8f4978d67eaec4ba3a13ffad6f905369eca11469] <==
	* I0817 21:23:15.885854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:23:15.900440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:23:15.900524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:23:15.908279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:23:15.908725       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15d856af-8fa9-45e9-b26f-151a264f6ce2", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-679314_f55658cd-94bf-4fae-a559-2256ab9f70cf became leader
	I0817 21:23:15.908762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-679314_f55658cd-94bf-4fae-a559-2256ab9f70cf!
	I0817 21:23:16.014013       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-679314_f55658cd-94bf-4fae-a559-2256ab9f70cf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-679314 -n ingress-addon-legacy-679314
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-679314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.81s)
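To iterate on this failure outside CI, a single minikube integration test can be selected with go test's -run filter (a sketch; the test/integration path follows the minikube repository layout and assumes out/minikube-linux-arm64 is already built):

	go test ./test/integration -run 'TestIngressAddonLegacy/serial/ValidateIngressAddons' -timeout 30m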

                                                
                                    
TestMissingContainerUpgrade (221.4s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.22.0.2206395328.exe start -p missing-upgrade-790957 --memory=2200 --driver=docker  --container-runtime=containerd
E0817 21:44:24.657606    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.22.0.2206395328.exe start -p missing-upgrade-790957 --memory=2200 --driver=docker  --container-runtime=containerd: (1m46.453941799s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-790957
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-790957: (11.297025317s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-790957
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-790957 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0817 21:46:07.547092    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-790957 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 90 (1m37.802638579s)

                                                
                                                
-- stdout --
	* [missing-upgrade-790957] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-790957 in cluster missing-upgrade-790957
	* Pulling base image ...
	* Downloading Kubernetes v1.21.2 preload ...
	* docker "missing-upgrade-790957" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:45:54.771334  123518 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:45:54.771446  123518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:45:54.771452  123518 out.go:309] Setting ErrFile to fd 2...
	I0817 21:45:54.771457  123518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:45:54.771692  123518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:45:54.772040  123518 out.go:303] Setting JSON to false
	I0817 21:45:54.773118  123518 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5294,"bootTime":1692303461,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:45:54.773204  123518 start.go:138] virtualization:  
	I0817 21:45:54.776957  123518 out.go:177] * [missing-upgrade-790957] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:45:54.779545  123518 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:45:54.781219  123518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:45:54.779713  123518 notify.go:220] Checking for updates...
	I0817 21:45:54.787743  123518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:45:54.789465  123518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:45:54.791540  123518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:45:54.793457  123518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:45:54.795999  123518 config.go:182] Loaded profile config "missing-upgrade-790957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0817 21:45:54.798654  123518 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 21:45:54.800606  123518 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:45:54.835848  123518 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:45:54.836012  123518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:45:54.979122  123518 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-17 21:45:54.966252905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:45:54.979221  123518 docker.go:294] overlay module found
	I0817 21:45:54.981515  123518 out.go:177] * Using the docker driver based on existing profile
	I0817 21:45:54.983440  123518 start.go:298] selected driver: docker
	I0817 21:45:54.983455  123518 start.go:902] validating driver "docker" against &{Name:missing-upgrade-790957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-790957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:45:54.983569  123518 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:45:54.984897  123518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:45:55.096884  123518 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-17 21:45:55.085019291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:45:55.097204  123518 cni.go:84] Creating CNI manager for ""
	I0817 21:45:55.097214  123518 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:45:55.097223  123518 start_flags.go:319] config:
	{Name:missing-upgrade-790957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-790957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:45:55.099665  123518 out.go:177] * Starting control plane node missing-upgrade-790957 in cluster missing-upgrade-790957
	I0817 21:45:55.101603  123518 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:45:55.103414  123518 out.go:177] * Pulling base image ...
	I0817 21:45:55.105325  123518 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0817 21:45:55.105479  123518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0817 21:45:55.141989  123518 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0817 21:45:55.142013  123518 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0817 21:45:55.175913  123518 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0817 21:45:55.175949  123518 cache.go:57] Caching tarball of preloaded images
	I0817 21:45:55.176497  123518 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0817 21:45:55.179542  123518 out.go:177] * Downloading Kubernetes v1.21.2 preload ...
	I0817 21:45:55.181410  123518 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:45:55.298695  123518 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f1e1f7bdb5d08690c839f70306158850 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0817 21:46:10.563706  123518 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:46:10.563870  123518 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:46:11.945773  123518 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
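The preload URL above carries its expected md5 in a ?checksum= query parameter, and the tarball is only trusted once that digest is verified against the downloaded bytes. A sketch of that verify-while-downloading step, assuming a plain HTTP GET rather than minikube's actual download package:

    package sketch

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // downloadPreload fetches a preload tarball and checks it against the
    // md5 advertised in the URL's ?checksum= parameter, failing on mismatch.
    func downloadPreload(url, dest, wantMD5 string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	f, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	// Hash the stream while writing it so no second pass is needed.
    	h := md5.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    	}
    	return nil
    }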
	I0817 21:46:11.945960  123518 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/missing-upgrade-790957/config.json ...
	I0817 21:46:11.946361  123518 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:46:11.946396  123518 start.go:365] acquiring machines lock for missing-upgrade-790957: {Name:mk12614e7c5adf325c8202b7057f18e0b53c0ac7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:46:11.946508  123518 start.go:369] acquired machines lock for "missing-upgrade-790957" in 75.683µs
	I0817 21:46:11.946555  123518 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:46:11.946563  123518 fix.go:54] fixHost starting: 
	I0817 21:46:11.946971  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:11.966189  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:11.966253  123518 fix.go:102] recreateIfNeeded on missing-upgrade-790957: state= err=unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:11.966279  123518 fix.go:107] machineExists: false. err=machine does not exist
	I0817 21:46:11.968423  123518 out.go:177] * docker "missing-upgrade-790957" container is missing, will recreate.
	I0817 21:46:11.970276  123518 delete.go:124] DEMOLISHING missing-upgrade-790957 ...
	I0817 21:46:11.970382  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:11.991032  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	W0817 21:46:11.991085  123518 stop.go:75] unable to get state: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:11.991101  123518 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:11.991536  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:12.012856  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:12.012922  123518 delete.go:82] Unable to get host status for missing-upgrade-790957, assuming it has already been deleted: state: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:12.012984  123518 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-790957
	W0817 21:46:12.031534  123518 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-790957 returned with exit code 1
	I0817 21:46:12.031572  123518 kic.go:367] could not find the container missing-upgrade-790957 to remove it. will try anyways
	I0817 21:46:12.031632  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:12.049756  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	W0817 21:46:12.049835  123518 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:12.049908  123518 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-790957 /bin/bash -c "sudo init 0"
	W0817 21:46:12.068259  123518 cli_runner.go:211] docker exec --privileged -t missing-upgrade-790957 /bin/bash -c "sudo init 0" returned with exit code 1
	I0817 21:46:12.068296  123518 oci.go:647] error shutdown missing-upgrade-790957: docker exec --privileged -t missing-upgrade-790957 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:13.068458  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:13.087781  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:13.087842  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:13.087859  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:13.087889  123518 retry.go:31] will retry after 362.00286ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:13.450504  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:13.469103  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:13.469162  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:13.469175  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:13.469205  123518 retry.go:31] will retry after 520.693541ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:13.990955  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:14.016461  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:14.016520  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:14.016531  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:14.016554  123518 retry.go:31] will retry after 772.769123ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:14.790221  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:14.808634  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:14.808695  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:14.808710  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:14.808734  123518 retry.go:31] will retry after 1.090541052s: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:15.899736  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:15.923827  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:15.923884  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:15.923893  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:15.923915  123518 retry.go:31] will retry after 3.599882319s: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:19.526693  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:19.546026  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:19.546093  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:19.546106  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:19.546133  123518 retry.go:31] will retry after 2.884984167s: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:22.433057  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:22.450416  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:22.450476  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:22.450496  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:22.450519  123518 retry.go:31] will retry after 4.478969913s: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:26.929715  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:26.946682  123518 cli_runner.go:211] docker container inspect missing-upgrade-790957 --format={{.State.Status}} returned with exit code 1
	I0817 21:46:26.946748  123518 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	I0817 21:46:26.946760  123518 oci.go:661] temporary error: container missing-upgrade-790957 status is  but expect it to be exited
	I0817 21:46:26.946820  123518 oci.go:88] couldn't shut down missing-upgrade-790957 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-790957": docker container inspect missing-upgrade-790957 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-790957
	 
	I0817 21:46:26.946894  123518 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-790957
	I0817 21:46:26.964169  123518 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-790957
	W0817 21:46:26.984607  123518 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-790957 returned with exit code 1
	I0817 21:46:26.984688  123518 cli_runner.go:164] Run: docker network inspect missing-upgrade-790957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:46:27.003117  123518 cli_runner.go:164] Run: docker network rm missing-upgrade-790957
	I0817 21:46:27.104492  123518 fix.go:114] Sleeping 1 second for extra luck!
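The retry.go intervals above (362ms, 520ms, 772ms, 1.09s, 3.6s, ...) are consistent with a jittered, roughly doubling backoff around docker container inspect, which in this run can never succeed because the container is genuinely gone; once the budget is spent, the code shrugs ("might be okay") and tears down what is left. A minimal sketch of such a loop (hypothetical function; the docker CLI is assumed to be on PATH):

    package sketch

    import (
    	"context"
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"strings"
    	"time"
    )

    // verifyExited polls container state with jittered, roughly doubling
    // delays. It returns an error once the context deadline expires; the
    // caller here treats that as non-fatal and proceeds to delete anyway.
    func verifyExited(ctx context.Context, name string) error {
    	delay := 300 * time.Millisecond
    	for {
    		out, err := exec.CommandContext(ctx, "docker", "container", "inspect",
    			name, "--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("couldn't verify container %q exited: %w", name, ctx.Err())
    		case <-time.After(delay + time.Duration(rand.Int63n(int64(delay)))):
    			delay *= 2 // back off before the next inspect
    		}
    	}
    }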
	I0817 21:46:28.104996  123518 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:46:28.107457  123518 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0817 21:46:28.107602  123518 start.go:159] libmachine.API.Create for "missing-upgrade-790957" (driver="docker")
	I0817 21:46:28.107625  123518 client.go:168] LocalClient.Create starting
	I0817 21:46:28.107719  123518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem
	I0817 21:46:28.107758  123518 main.go:141] libmachine: Decoding PEM data...
	I0817 21:46:28.107777  123518 main.go:141] libmachine: Parsing certificate...
	I0817 21:46:28.107839  123518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem
	I0817 21:46:28.107862  123518 main.go:141] libmachine: Decoding PEM data...
	I0817 21:46:28.107878  123518 main.go:141] libmachine: Parsing certificate...
	I0817 21:46:28.108133  123518 cli_runner.go:164] Run: docker network inspect missing-upgrade-790957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:46:28.134151  123518 cli_runner.go:211] docker network inspect missing-upgrade-790957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:46:28.134226  123518 network_create.go:281] running [docker network inspect missing-upgrade-790957] to gather additional debugging logs...
	I0817 21:46:28.134241  123518 cli_runner.go:164] Run: docker network inspect missing-upgrade-790957
	W0817 21:46:28.165454  123518 cli_runner.go:211] docker network inspect missing-upgrade-790957 returned with exit code 1
	I0817 21:46:28.165482  123518 network_create.go:284] error running [docker network inspect missing-upgrade-790957]: docker network inspect missing-upgrade-790957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-790957 not found
	I0817 21:46:28.165497  123518 network_create.go:286] output of [docker network inspect missing-upgrade-790957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-790957 not found
	
	** /stderr **
	I0817 21:46:28.165562  123518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:46:28.185606  123518 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4059c87ceb65 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ea:5b:90:7f} reservation:<nil>}
	I0817 21:46:28.185925  123518 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9f5c18d9b1e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:3e:76:10} reservation:<nil>}
	I0817 21:46:28.186233  123518 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c7bda8005d5b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:03:fb:cd:da} reservation:<nil>}
	I0817 21:46:28.187203  123518 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000ab5f60}
	I0817 21:46:28.187231  123518 network_create.go:123] attempt to create docker network missing-upgrade-790957 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0817 21:46:28.187288  123518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-790957 missing-upgrade-790957
	I0817 21:46:28.258907  123518 network_create.go:107] docker network missing-upgrade-790957 192.168.76.0/24 created
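The subnet scan above walks the private 192.168.x.0/24 range in steps of 9 in the third octet (49, 58, 67, ...) and takes the first block no existing bridge network occupies, here 192.168.76.0/24. A sketch of that selection under the same assumptions (hypothetical helper; the caller supplies the taken CIDRs gathered from docker network inspect):

    package sketch

    import (
    	"errors"
    	"fmt"
    )

    // freeSubnet returns the first candidate /24 that no existing network
    // already uses, following the +9 third-octet progression in the log.
    func freeSubnet(taken map[string]bool) (string, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr, nil
    		}
    	}
    	return "", errors.New("no free private /24 in range")
    }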
	I0817 21:46:28.258937  123518 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-790957" container
	I0817 21:46:28.259009  123518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:46:28.283072  123518 cli_runner.go:164] Run: docker volume create missing-upgrade-790957 --label name.minikube.sigs.k8s.io=missing-upgrade-790957 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:46:28.299591  123518 oci.go:103] Successfully created a docker volume missing-upgrade-790957
	I0817 21:46:28.299676  123518 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-790957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-790957 --entrypoint /usr/bin/test -v missing-upgrade-790957:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0817 21:46:29.650596  123518 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-790957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-790957 --entrypoint /usr/bin/test -v missing-upgrade-790957:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (1.350881766s)
	I0817 21:46:29.650705  123518 oci.go:107] Successfully prepared a docker volume missing-upgrade-790957
	I0817 21:46:29.650724  123518 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0817 21:46:29.650742  123518 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:46:29.650827  123518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-790957:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:46:35.480580  123518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-790957:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.829716975s)
	I0817 21:46:35.480610  123518 kic.go:199] duration metric: took 5.829865 seconds to extract preloaded images to volume
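Note the trick used for the extraction itself: nothing is unpacked on the host. A short-lived container runs tar as its entrypoint, with the tarball bind-mounted read-only and the named volume mounted as the destination, exactly the shape of the docker run above. A sketch of assembling that command (names taken from this log; the helper is invented):

    package sketch

    import (
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks an lz4-compressed tarball straight into a named
    // Docker volume via a throwaway container whose entrypoint is tar.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stderr = os.Stderr // surface tar errors to the caller's log
    	return cmd.Run()
    }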
	W0817 21:46:35.480743  123518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:46:35.480849  123518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:46:35.590421  123518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-790957 --name missing-upgrade-790957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-790957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-790957 --network missing-upgrade-790957 --ip 192.168.76.2 --volume missing-upgrade-790957:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0817 21:46:36.080630  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Running}}
	I0817 21:46:36.147240  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:36.176262  123518 cli_runner.go:164] Run: docker exec missing-upgrade-790957 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:46:36.279448  123518 oci.go:144] the created container "missing-upgrade-790957" has a running status.
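Every published port in the docker run above uses the 127.0.0.1:: form, so Docker assigns an ephemeral host port; the inspect calls that follow recover it (32947 for SSH) with a Go template. A sketch of that lookup, as a hypothetical helper around the same template:

    package sketch

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort recovers the ephemeral host port Docker assigned to a
    // published container port such as "22/tcp".
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }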
	I0817 21:46:36.279487  123518 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa...
	I0817 21:46:37.623384  123518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:46:37.653938  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:37.681752  123518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:46:37.681773  123518 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-790957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:46:37.794099  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:37.827116  123518 machine.go:88] provisioning docker machine ...
	I0817 21:46:37.827171  123518 ubuntu.go:169] provisioning hostname "missing-upgrade-790957"
	I0817 21:46:37.827247  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:37.870060  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:37.870518  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:37.870536  123518 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-790957 && echo "missing-upgrade-790957" | sudo tee /etc/hostname
	I0817 21:46:38.052381  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-790957
	
	I0817 21:46:38.052523  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.079056  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:38.079662  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:38.079685  123518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-790957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-790957/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-790957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:46:38.236825  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:46:38.236852  123518 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:46:38.236882  123518 ubuntu.go:177] setting up certificates
	I0817 21:46:38.236898  123518 provision.go:83] configureAuth start
	I0817 21:46:38.236961  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:38.285214  123518 provision.go:138] copyHostCerts
	I0817 21:46:38.285282  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:46:38.285293  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:46:38.285363  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:46:38.285448  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:46:38.285457  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:46:38.285482  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:46:38.285535  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:46:38.285544  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:46:38.285567  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:46:38.285613  123518 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-790957 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-790957]
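The server cert generated here is signed by the profile's CA and lists both the container IP and the localhost/minikube names as SANs, which is what lets clients reach the machine through the forwarded 127.0.0.1 ports. A sketch of issuing such a cert with crypto/x509, under the assumption of RSA keys on both sides (not libmachine's actual code):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a CA-signed server certificate carrying the IP
    // and DNS SANs listed on the provision.go line above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-790957"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dnsNames,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }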
	I0817 21:46:38.515035  123518 provision.go:172] copyRemoteCerts
	I0817 21:46:38.515107  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:46:38.515153  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.532607  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.623730  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:46:38.648105  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:46:38.671242  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:46:38.694838  123518 provision.go:86] duration metric: configureAuth took 457.924153ms
	I0817 21:46:38.694900  123518 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:46:38.695098  123518 config.go:182] Loaded profile config "missing-upgrade-790957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0817 21:46:38.695112  123518 machine.go:91] provisioned docker machine in 867.948928ms
	I0817 21:46:38.695119  123518 client.go:171] LocalClient.Create took 10.58748882s
	I0817 21:46:38.695137  123518 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-790957" took 10.587534637s
	I0817 21:46:38.695148  123518 start.go:300] post-start starting for "missing-upgrade-790957" (driver="docker")
	I0817 21:46:38.695161  123518 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:46:38.695213  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:46:38.695258  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.714904  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.803996  123518 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:46:38.807721  123518 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:46:38.807746  123518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:46:38.807758  123518 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:46:38.807764  123518 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 21:46:38.807773  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:46:38.807825  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:46:38.807909  123518 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:46:38.808009  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:46:38.816721  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:46:38.839869  123518 start.go:303] post-start completed in 144.70785ms
	I0817 21:46:38.840225  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:38.864587  123518 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/missing-upgrade-790957/config.json ...
	I0817 21:46:38.864855  123518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:46:38.864897  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.882968  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.972653  123518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:46:38.978096  123518 start.go:128] duration metric: createHost completed in 10.873065656s
	I0817 21:46:38.978193  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:38.996021  123518 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:46:38.996047  123518 machine.go:88] provisioning docker machine ...
	I0817 21:46:38.996063  123518 ubuntu.go:169] provisioning hostname "missing-upgrade-790957"
	I0817 21:46:38.996124  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.015352  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:39.015795  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:39.015812  123518 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-790957 && echo "missing-upgrade-790957" | sudo tee /etc/hostname
	I0817 21:46:39.151133  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-790957
	
	I0817 21:46:39.151210  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.170219  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:39.170685  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:39.170710  123518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-790957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-790957/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-790957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:46:39.291708  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:46:39.291778  123518 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:46:39.291809  123518 ubuntu.go:177] setting up certificates
	I0817 21:46:39.291846  123518 provision.go:83] configureAuth start
	I0817 21:46:39.291931  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:39.310315  123518 provision.go:138] copyHostCerts
	I0817 21:46:39.310385  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:46:39.310393  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:46:39.310481  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:46:39.310591  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:46:39.310596  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:46:39.310729  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:46:39.310815  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:46:39.310820  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:46:39.310844  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:46:39.310897  123518 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-790957 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-790957]
	I0817 21:46:39.561061  123518 provision.go:172] copyRemoteCerts
	I0817 21:46:39.561131  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:46:39.561204  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.582242  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:39.671660  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:46:39.695483  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:46:39.718915  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:46:39.742192  123518 provision.go:86] duration metric: configureAuth took 450.309272ms
	I0817 21:46:39.742220  123518 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:46:39.742398  123518 config.go:182] Loaded profile config "missing-upgrade-790957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0817 21:46:39.742411  123518 machine.go:91] provisioned docker machine in 746.358378ms
	I0817 21:46:39.742418  123518 start.go:300] post-start starting for "missing-upgrade-790957" (driver="docker")
	I0817 21:46:39.742427  123518 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:46:39.742477  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:46:39.742526  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.760284  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:39.851946  123518 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:46:39.855692  123518 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:46:39.855719  123518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:46:39.855731  123518 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:46:39.855754  123518 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 21:46:39.855768  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:46:39.855828  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:46:39.855908  123518 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:46:39.856012  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:46:39.864714  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:46:39.888145  123518 start.go:303] post-start completed in 145.712148ms
	I0817 21:46:39.888268  123518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:46:39.888332  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.906465  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:39.996247  123518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:46:40.001741  123518 fix.go:56] fixHost completed within 28.055171625s
	I0817 21:46:40.001764  123518 start.go:83] releasing machines lock for "missing-upgrade-790957", held for 28.055216482s
	I0817 21:46:40.001829  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:40.027831  123518 ssh_runner.go:195] Run: cat /version.json
	I0817 21:46:40.027887  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:40.027896  123518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:46:40.027960  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:40.048898  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:40.060110  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	W0817 21:46:40.255409  123518 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
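The warning is most likely benign: this profile was created with kicbase v0.0.25, which appears to predate the embedded /version.json marker that newer minikube versions read, so the cat is expected to fail and minikube treats it as non-fatal. A quick confirmation, assuming docker access to the node container:

    # Expected to print the fallback message on this old image.
    docker exec missing-upgrade-790957 sh -c 'cat /version.json 2>/dev/null || echo "no embedded version marker"'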
	I0817 21:46:40.255508  123518 ssh_runner.go:195] Run: systemctl --version
	I0817 21:46:40.264843  123518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:46:40.273261  123518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:46:40.313273  123518 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
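The find/sed one-liner above is hard to read flattened into a single log line. Unwrapped, it gives every loopback CNI config a "name" field (if missing) and pins cniVersion to 1.0.0; this is a readability sketch of the same command, not a different procedure:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
         -name '*loopback.conf*' -not -name '*.mk_disabled' \
         -exec sh -c '
           grep -q loopback "$1" || exit 0        # only touch loopback configs
           grep -q name "$1" || \
             sed -i "/\"type\": \"loopback\"/i \ \ \ \ \"name\": \"loopback\"," "$1"
           sed -i "s|\"cniVersion\": \".*\"|\"cniVersion\": \"1.0.0\"|g" "$1"
         ' _ {} \;

(find runs under sudo here, so the inner sed already has root.)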
	I0817 21:46:40.313347  123518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:46:40.345453  123518 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
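The competing bridge and podman CNI configs are then taken out of play by renaming rather than deleting them, so a later start can restore them. The same rename pattern, wrapped for readability:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
         \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
         -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;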
	I0817 21:46:40.345521  123518 start.go:466] detecting cgroup driver to use...
	I0817 21:46:40.345564  123518 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:46:40.345640  123518 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:46:40.359075  123518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:46:40.372188  123518 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:46:40.372285  123518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:46:40.388174  123518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:46:40.404841  123518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 21:46:40.419631  123518 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 21:46:40.419710  123518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:46:40.538959  123518 docker.go:212] disabling docker service ...
	I0817 21:46:40.539018  123518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:46:40.564532  123518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:46:40.579179  123518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:46:40.710238  123518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:46:40.842944  123518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:46:40.856564  123518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:46:40.874687  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0817 21:46:40.886724  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:46:40.897688  123518 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:46:40.897796  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:46:40.908546  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:40.920192  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:46:40.934813  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:40.951016  123518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:46:40.961867  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
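Taken together, the sed edits above force a cgroupfs runc runtime, a pinned pause image, and the standard CNI conf dir in /etc/containerd/config.toml. A quick way to confirm what actually landed on disk (the grep keys match the patterns the edits target; the exact section layout depends on the config version already present in this old image):

    sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|runc\.v' \
         /etc/containerd/config.toml
    # Expected after the edits (assuming a config v2 layout):
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.4.1"
    #   conf_dir = "/etc/cni/net.d"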
	I0817 21:46:40.976300  123518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:46:40.986182  123518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
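Before the restart, the kernel networking prerequisites are checked and enabled: bridged traffic must pass through iptables and IPv4 forwarding must be on. The sysctl equivalents of the two commands above:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # read-only check; needs the br_netfilter module
    sudo sysctl -w net.ipv4.ip_forward=1             # same effect as the echo into /proc/sys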
	I0817 21:46:40.995446  123518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:46:41.119417  123518 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0817 21:46:41.217354  123518 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:46:41.217420  123518 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:46:41.221964  123518 start.go:534] Will wait 60s for crictl version
	I0817 21:46:41.222026  123518 ssh_runner.go:195] Run: which crictl
	I0817 21:46:41.225856  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:41.262130  123518 retry.go:31] will retry after 14.688849796s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:46:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
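The fatal error says the socket does not serve runtime.v1alpha2.RuntimeService, i.e. this crictl and the containerd CRI endpoint disagree on the CRI API version, or the CRI plugin never came back up after the config edits. Some hedged diagnostics one could run inside the node to narrow it down (ctr and journalctl availability is an assumption; only /usr/bin/crictl appears in the log):

    sudo ctr --address /run/containerd/containerd.sock plugins ls | grep cri   # is the CRI plugin loaded and healthy?
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock --debug version
    sudo journalctl -u containerd --no-pager | tail -n 50                      # look for config parse/startup errors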
	I0817 21:46:55.951201  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:55.983968  123518 retry.go:31] will retry after 18.684623322s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:46:55Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0817 21:47:14.670729  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:47:14.756584  123518 retry.go:31] will retry after 17.699083176s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0817 21:47:32.456561  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:47:32.488543  123518 out.go:177] 
	W0817 21:47:32.490370  123518 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0817 21:47:32.490385  123518 out.go:239] * 
	W0817 21:47:32.491297  123518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:47:32.494203  123518 out.go:177] 

** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.22.0. args: out/minikube-linux-arm64 start -p missing-upgrade-790957 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-08-17 21:47:32.540869252 +0000 UTC m=+2203.659758766
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-790957
helpers_test.go:235: (dbg) docker inspect missing-upgrade-790957:

-- stdout --
	[
	    {
	        "Id": "60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3",
	        "Created": "2023-08-17T21:46:35.616986219Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 125286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:46:36.068980269Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3/hosts",
	        "LogPath": "/var/lib/docker/containers/60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3/60c431093cbc241fd9fc8eeb48086e3acb24c2a40ac1b14d2fc96ab4989d09f3-json.log",
	        "Name": "/missing-upgrade-790957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-790957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-790957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/189cc16d99a4e434c4dfab47629e126ac9e0ea2fa7c2627e5ba72e7118cbd844-init/diff:/var/lib/docker/overlay2/0cae421c19f4a354b47f39dda243d639a757574ea3764a91cde1f234e209a85b/diff:/var/lib/docker/overlay2/d257dabff6c9734eaa31683d4a2721d9d905b47b790b7952f63a17cad1bb2e72/diff:/var/lib/docker/overlay2/fdd8b523241e5c921f465ed8f15b88aabc05013980903e0e62d98eab6c38b358/diff:/var/lib/docker/overlay2/0f43c67c0f10a5d4009ac9596103634194768bb066d39fc797234631f8391fe4/diff:/var/lib/docker/overlay2/cf43d663c17eed5dccb171cf0467f3712357ee059b746b3f825c4385697b8328/diff:/var/lib/docker/overlay2/755da4aa25c1486f17630a6fa27ff0d92a8cc13e92aa7ea1b0e802fb8a31e939/diff:/var/lib/docker/overlay2/51941d1dd6ea2a217741b874e43f5590c430e443186c718e1e8d9ad25fcf3a7f/diff:/var/lib/docker/overlay2/c007a739d0520ef281b464c2ca55b802709a20d26f2fa8b9f6e493a2c1e554b4/diff:/var/lib/docker/overlay2/56321f90fc076ab76bb86f88bd1188e477631e0f742fd3c9716ff480590e62d0/diff:/var/lib/docker/overlay2/4cc8ac
3a9f6826937fe8bb551496d1130f2c7fcd4494b06e6dbef2b094925eca/diff:/var/lib/docker/overlay2/0663a1e71f0b912e108f6cb84eecb7d61c252ede2019929e504bbda328797949/diff:/var/lib/docker/overlay2/2be1b1f183633e3a09cb7ef9d9bbfaef5dce4e0b6961339a351532a10afc4cd0/diff:/var/lib/docker/overlay2/0ec823c0e633f1a1e622d99d211feb2910a68a9059ee7164dc0d7f64794c9c99/diff:/var/lib/docker/overlay2/f11c5c866621705eef2a8e91dd7a7d726438bcd95699477b5fc4248fbe4a0075/diff:/var/lib/docker/overlay2/f23ff18000216db9e166545f6561cc51107ad7c574893770a2c75899307617df/diff:/var/lib/docker/overlay2/f66888baa6d8c20ab074812149de593672240a229ad36598a77d5637a14fcb06/diff:/var/lib/docker/overlay2/0bb80fa3fea03f3e09e7de24f366a97df139bdf9221f498df2cd04d26c26a02a/diff:/var/lib/docker/overlay2/4fdb7a15a2f53cf1290d5d2c3b4bdd10af49d8e0c7085a2fd4bd564e874fdee1/diff:/var/lib/docker/overlay2/781df99ecbeec0e15a905965d156db186bc0a5f7732fb31aafc6a94a15443534/diff:/var/lib/docker/overlay2/3282cf0b5a481f0793fcde3bbc36b59642256bc0bc1d554ae197ddfdc0628581/diff:/var/lib/d
ocker/overlay2/c2bb9eec50adffaff366160dcb6224185d0967899eb100efc8571161586849fc/diff:/var/lib/docker/overlay2/2d60349eb800560efa081c403bcfa84ed80119e1b19e80fcd85540b1f9da31b8/diff:/var/lib/docker/overlay2/251707c40d9097b24c4752ff3ff03b71e9aa00269cc37e14ccf4f7d2173ea6c1/diff:/var/lib/docker/overlay2/e342ead4602baa4243c176c68390f2cb6809e1b5e98db0567ad4e3fc1f3f8677/diff:/var/lib/docker/overlay2/ed86a8b564411a73de284fa5f720c65a489a1870fc69ca48339c37fde5005cf4/diff:/var/lib/docker/overlay2/056ebe6f1104a7d27ca54695dacf7c2eeb6eb708d230268dee2fdd2fa7457e88/diff:/var/lib/docker/overlay2/fe31cca69bd45a10ea37d1b1666627a2680c1aee4c28c41d85d128b1574ece2b/diff:/var/lib/docker/overlay2/817827fdf3ad6e6389fc7d3ca1fd664541e7f651e7bd21b74e8401a51c8bd0c9/diff:/var/lib/docker/overlay2/b5c3cb2a387b99183141049a575d68e98596ba40821eaab196da78a7dafe4960/diff:/var/lib/docker/overlay2/62a7ccfdede4420ceb1923ca9ace9f74573e1e680c23f0c209f78cac95c98e47/diff:/var/lib/docker/overlay2/4fc64e03818078b630fa543d3b6066df1fa429d94a150a01f2ebf4ccd04
5786c/diff:/var/lib/docker/overlay2/f235b0c6c55f7bc8396f0c1508832f70deb9be4959ee9a469d3457cbe894a311/diff:/var/lib/docker/overlay2/f28f067b023a753235da26de22d99466f1273e7f956c4dcf9308666174cf9c35/diff:/var/lib/docker/overlay2/5e0888deb487c1bba30cbbb8f94dec1616460acefc061168387e6f3adce081c1/diff:/var/lib/docker/overlay2/5b55fa5932c77c5b1782747e094e8c21377d0dc2fc9a5afa0ba2cdb0cb32a6b1/diff:/var/lib/docker/overlay2/9791780a877f2b559b1c14329882fe8853f1658630c32ee1d13c54d793df286f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/189cc16d99a4e434c4dfab47629e126ac9e0ea2fa7c2627e5ba72e7118cbd844/merged",
	                "UpperDir": "/var/lib/docker/overlay2/189cc16d99a4e434c4dfab47629e126ac9e0ea2fa7c2627e5ba72e7118cbd844/diff",
	                "WorkDir": "/var/lib/docker/overlay2/189cc16d99a4e434c4dfab47629e126ac9e0ea2fa7c2627e5ba72e7118cbd844/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-790957",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-790957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-790957",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-790957",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-790957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e739de11db3ee065fbf7085ccbf5984d92bc761d63c15857384ea7c084a738",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/37e739de11db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-790957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "60c431093cbc",
	                        "missing-upgrade-790957"
	                    ],
	                    "NetworkID": "71bb36a6adbf5afd08017175ab6da1a6627ef109e231dfa2cfc24a11b94074a3",
	                    "EndpointID": "71fb00d024a21e52183724975c0d13f0315326be121346edb5ae216909ae1664",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-790957 -n missing-upgrade-790957
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-790957 -n missing-upgrade-790957: exit status 2 (349.449542ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p missing-upgrade-790957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p missing-upgrade-790957 logs -n 25: (1.468970727s)
helpers_test.go:252: TestMissingContainerUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |               Args                |           Profile           |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           |                               |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           |                               |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           |                               |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           | 17 Aug 23 21:42 UTC           |
	|         | --cancel-scheduled                |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           |                               |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           |                               |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:42 UTC           | 17 Aug 23 21:43 UTC           |
	|         | --schedule 15s                    |                             |         |         |                               |                               |
	| delete  | -p scheduled-stop-906964          | scheduled-stop-906964       | jenkins | v1.31.2 | 17 Aug 23 21:43 UTC           | 17 Aug 23 21:43 UTC           |
	| start   | -p insufficient-storage-553805    | insufficient-storage-553805 | jenkins | v1.31.2 | 17 Aug 23 21:43 UTC           |                               |
	|         | --memory=2048 --output=json       |                             |         |         |                               |                               |
	|         | --wait=true --driver=docker       |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| delete  | -p insufficient-storage-553805    | insufficient-storage-553805 | jenkins | v1.31.2 | 17 Aug 23 21:43 UTC           | 17 Aug 23 21:43 UTC           |
	| start   | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:43 UTC           |                               |
	|         | --no-kubernetes                   |                             |         |         |                               |                               |
	|         | --kubernetes-version=1.20         |                             |         |         |                               |                               |
	|         | --driver=docker                   |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| start   | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:43 UTC           | 17 Aug 23 21:44 UTC           |
	|         | --driver=docker                   |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| start   | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:44 UTC           | 17 Aug 23 21:45 UTC           |
	|         | --no-kubernetes                   |                             |         |         |                               |                               |
	|         | --driver=docker                   |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| delete  | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:45 UTC           |
	| start   | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:45 UTC           |
	|         | --no-kubernetes                   |                             |         |         |                               |                               |
	|         | --driver=docker                   |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| ssh     | -p NoKubernetes-111976 sudo       | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           |                               |
	|         | systemctl is-active --quiet       |                             |         |         |                               |                               |
	|         | service kubelet                   |                             |         |         |                               |                               |
	| stop    | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:45 UTC           |
	| start   | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:45 UTC           |
	|         | --driver=docker                   |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| ssh     | -p NoKubernetes-111976 sudo       | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           |                               |
	|         | systemctl is-active --quiet       |                             |         |         |                               |                               |
	|         | service kubelet                   |                             |         |         |                               |                               |
	| delete  | -p NoKubernetes-111976            | NoKubernetes-111976         | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:45 UTC           |
	| start   | -p kubernetes-upgrade-483730      | kubernetes-upgrade-483730   | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           | 17 Aug 23 21:46 UTC           |
	|         | --memory=2200                     |                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0      |                             |         |         |                               |                               |
	|         | --alsologtostderr                 |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker              |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| start   | -p missing-upgrade-790957         | missing-upgrade-790957      | jenkins | v1.22.0 | Thu, 17 Aug 2023 21:43:56 UTC | Thu, 17 Aug 2023 21:45:43 UTC |
	|         | --memory=2200 --driver=docker     |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| start   | -p missing-upgrade-790957         | missing-upgrade-790957      | jenkins | v1.31.2 | 17 Aug 23 21:45 UTC           |                               |
	|         | --memory=2200                     |                             |         |         |                               |                               |
	|         | --alsologtostderr                 |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker              |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	| stop    | -p kubernetes-upgrade-483730      | kubernetes-upgrade-483730   | jenkins | v1.31.2 | 17 Aug 23 21:46 UTC           | 17 Aug 23 21:46 UTC           |
	| start   | -p kubernetes-upgrade-483730      | kubernetes-upgrade-483730   | jenkins | v1.31.2 | 17 Aug 23 21:46 UTC           |                               |
	|         | --memory=2200                     |                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.28.0-rc.1 |                             |         |         |                               |                               |
	|         | --alsologtostderr                 |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker              |                             |         |         |                               |                               |
	|         | --container-runtime=containerd    |                             |         |         |                               |                               |
	|---------|-----------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:46:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:46:35.962582  125284 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:46:35.962865  125284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:46:35.962886  125284 out.go:309] Setting ErrFile to fd 2...
	I0817 21:46:35.962904  125284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:46:35.963170  125284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:46:35.963581  125284 out.go:303] Setting JSON to false
	I0817 21:46:35.964693  125284 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5335,"bootTime":1692303461,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:46:35.964783  125284 start.go:138] virtualization:  
	I0817 21:46:35.968259  125284 out.go:177] * [kubernetes-upgrade-483730] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:46:35.971012  125284 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:46:35.971105  125284 notify.go:220] Checking for updates...
	I0817 21:46:35.973739  125284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:46:35.975914  125284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:46:35.978291  125284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:46:35.981115  125284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:46:35.986976  125284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:46:35.989537  125284 config.go:182] Loaded profile config "kubernetes-upgrade-483730": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0817 21:46:35.990077  125284 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:46:36.017250  125284 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:46:36.017352  125284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:46:36.206196  125284 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-17 21:46:36.189978291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:46:36.206303  125284 docker.go:294] overlay module found
	I0817 21:46:36.208748  125284 out.go:177] * Using the docker driver based on existing profile
	I0817 21:46:36.210926  125284 start.go:298] selected driver: docker
	I0817 21:46:36.210945  125284 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-483730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-483730 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:46:36.211059  125284 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:46:36.211646  125284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:46:36.391657  125284 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-17 21:46:36.381372692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:46:36.392005  125284 cni.go:84] Creating CNI manager for ""
	I0817 21:46:36.392022  125284 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:46:36.392034  125284 start_flags.go:319] config:
	{Name:kubernetes-upgrade-483730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-483730 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:46:36.394440  125284 out.go:177] * Starting control plane node kubernetes-upgrade-483730 in cluster kubernetes-upgrade-483730
	I0817 21:46:36.398230  125284 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:46:36.403612  125284 out.go:177] * Pulling base image ...
	I0817 21:46:36.406009  125284 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime containerd
	I0817 21:46:36.406061  125284 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I0817 21:46:36.406074  125284 cache.go:57] Caching tarball of preloaded images
	I0817 21:46:36.406135  125284 preload.go:174] Found /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 21:46:36.406148  125284 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on containerd
	I0817 21:46:36.406264  125284 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/config.json ...
	I0817 21:46:36.406454  125284 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:46:36.445023  125284 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:46:36.445048  125284 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:46:36.445073  125284 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:46:36.445116  125284 start.go:365] acquiring machines lock for kubernetes-upgrade-483730: {Name:mk3099099986a1e3e5864583627bd812d58df54c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:46:36.445173  125284 start.go:369] acquired machines lock for "kubernetes-upgrade-483730" in 33.829µs
	I0817 21:46:36.445190  125284 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:46:36.445195  125284 fix.go:54] fixHost starting: 
	I0817 21:46:36.445794  125284 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-483730 --format={{.State.Status}}
	I0817 21:46:36.476963  125284 fix.go:102] recreateIfNeeded on kubernetes-upgrade-483730: state=Stopped err=<nil>
	W0817 21:46:36.477004  125284 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:46:36.479959  125284 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-483730" ...
	I0817 21:46:35.480580  123518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-790957:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.829716975s)
	I0817 21:46:35.480610  123518 kic.go:199] duration metric: took 5.829865 seconds to extract preloaded images to volume
	W0817 21:46:35.480743  123518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:46:35.480849  123518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:46:35.590421  123518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-790957 --name missing-upgrade-790957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-790957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-790957 --network missing-upgrade-790957 --ip 192.168.76.2 --volume missing-upgrade-790957:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
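For readability, the container creation command above corresponds to the following invocation; every flag is copied verbatim from the log line, only wrapped:

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --volume missing-upgrade-790957:/var \
      --hostname missing-upgrade-790957 --name missing-upgrade-790957 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=missing-upgrade-790957 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=missing-upgrade-790957 \
      --network missing-upgrade-790957 --ip 192.168.76.2 \
      --memory=2200mb --cpus=2 -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79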
	I0817 21:46:36.080630  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Running}}
	I0817 21:46:36.147240  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:36.176262  123518 cli_runner.go:164] Run: docker exec missing-upgrade-790957 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:46:36.279448  123518 oci.go:144] the created container "missing-upgrade-790957" has a running status.
	I0817 21:46:36.279487  123518 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa...
	I0817 21:46:37.623384  123518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:46:37.653938  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:37.681752  123518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:46:37.681773  123518 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-790957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:46:37.794099  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	I0817 21:46:37.827116  123518 machine.go:88] provisioning docker machine ...
	I0817 21:46:37.827171  123518 ubuntu.go:169] provisioning hostname "missing-upgrade-790957"
	I0817 21:46:37.827247  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:37.870060  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:37.870518  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:37.870536  123518 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-790957 && echo "missing-upgrade-790957" | sudo tee /etc/hostname
	I0817 21:46:38.052381  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-790957
	
	I0817 21:46:38.052523  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.079056  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:38.079662  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:38.079685  123518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-790957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-790957/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-790957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:46:38.236825  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
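The SSH command above is minikube's hostname guard: if /etc/hosts does not already map the machine name, it rewrites an existing 127.0.1.1 line in place, otherwise it appends one. A minimal local-file sketch of the same logic in Go (the real thing runs remotely via sudo sed/tee):

    package main

    import (
        "log"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        host := "missing-upgrade-790957"
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        text := string(raw)
        if strings.Contains(text, host) {
            return // hostname already mapped, nothing to do
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(text) {
            text = re.ReplaceAllString(text, "127.0.1.1 "+host)
        } else {
            text += "\n127.0.1.1 " + host + "\n"
        }
        if err := os.WriteFile("/etc/hosts", []byte(text), 0o644); err != nil {
            log.Fatal(err)
        }
    }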
	I0817 21:46:38.236852  123518 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:46:38.236882  123518 ubuntu.go:177] setting up certificates
	I0817 21:46:38.236898  123518 provision.go:83] configureAuth start
	I0817 21:46:38.236961  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:38.285214  123518 provision.go:138] copyHostCerts
	I0817 21:46:38.285282  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:46:38.285293  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:46:38.285363  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:46:38.285448  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:46:38.285457  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:46:38.285482  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:46:38.285535  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:46:38.285544  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:46:38.285567  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:46:38.285613  123518 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-790957 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-790957]
	I0817 21:46:38.515035  123518 provision.go:172] copyRemoteCerts
	I0817 21:46:38.515107  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:46:38.515153  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.532607  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.623730  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:46:38.648105  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:46:38.671242  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:46:38.694838  123518 provision.go:86] duration metric: configureAuth took 457.924153ms
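configureAuth refreshes the client certs under .minikube and then generates a per-machine server certificate whose SANs are exactly the list in the "generating server cert" line: the container IP, 127.0.0.1, localhost, minikube, and the machine name. A minimal sketch of such a certificate template using the Go standard library (self-signed here for brevity, whereas minikube signs with the CA at certs/ca.pem):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-790957"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go:112 log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "missing-upgrade-790957"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }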
	I0817 21:46:38.694900  123518 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:46:38.695098  123518 config.go:182] Loaded profile config "missing-upgrade-790957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0817 21:46:38.695112  123518 machine.go:91] provisioned docker machine in 867.948928ms
	I0817 21:46:38.695119  123518 client.go:171] LocalClient.Create took 10.58748882s
	I0817 21:46:38.695137  123518 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-790957" took 10.587534637s
	I0817 21:46:38.695148  123518 start.go:300] post-start starting for "missing-upgrade-790957" (driver="docker")
	I0817 21:46:38.695161  123518 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:46:38.695213  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:46:38.695258  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.714904  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.803996  123518 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:46:38.807721  123518 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:46:38.807746  123518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:46:38.807758  123518 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:46:38.807764  123518 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 21:46:38.807773  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:46:38.807825  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:46:38.807909  123518 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:46:38.808009  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:46:38.816721  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:46:38.839869  123518 start.go:303] post-start completed in 144.70785ms
	I0817 21:46:38.840225  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:38.864587  123518 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/missing-upgrade-790957/config.json ...
	I0817 21:46:38.864855  123518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:46:38.864897  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:38.882968  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:38.972653  123518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:46:38.978096  123518 start.go:128] duration metric: createHost completed in 10.873065656s
	I0817 21:46:38.978193  123518 cli_runner.go:164] Run: docker container inspect missing-upgrade-790957 --format={{.State.Status}}
	W0817 21:46:38.996021  123518 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:46:38.996047  123518 machine.go:88] provisioning docker machine ...
	I0817 21:46:38.996063  123518 ubuntu.go:169] provisioning hostname "missing-upgrade-790957"
	I0817 21:46:38.996124  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.015352  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:39.015795  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:39.015812  123518 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-790957 && echo "missing-upgrade-790957" | sudo tee /etc/hostname
	I0817 21:46:39.151133  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-790957
	
	I0817 21:46:39.151210  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.170219  123518 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:39.170685  123518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32947 <nil> <nil>}
	I0817 21:46:39.170710  123518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-790957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-790957/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-790957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:46:39.291708  123518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:46:39.291778  123518 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:46:39.291809  123518 ubuntu.go:177] setting up certificates
	I0817 21:46:39.291846  123518 provision.go:83] configureAuth start
	I0817 21:46:39.291931  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:39.310315  123518 provision.go:138] copyHostCerts
	I0817 21:46:39.310385  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:46:39.310393  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:46:39.310481  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:46:39.310591  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:46:39.310596  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:46:39.310729  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:46:39.310815  123518 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:46:39.310820  123518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:46:39.310844  123518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:46:39.310897  123518 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-790957 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-790957]
	I0817 21:46:39.561061  123518 provision.go:172] copyRemoteCerts
	I0817 21:46:39.561131  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:46:39.561204  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.582242  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:39.671660  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:46:39.695483  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:46:39.718915  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:46:39.742192  123518 provision.go:86] duration metric: configureAuth took 450.309272ms
	I0817 21:46:39.742220  123518 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:46:39.742398  123518 config.go:182] Loaded profile config "missing-upgrade-790957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0817 21:46:39.742411  123518 machine.go:91] provisioned docker machine in 746.358378ms
	I0817 21:46:39.742418  123518 start.go:300] post-start starting for "missing-upgrade-790957" (driver="docker")
	I0817 21:46:39.742427  123518 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:46:39.742477  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:46:39.742526  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.760284  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:36.481944  125284 cli_runner.go:164] Run: docker start kubernetes-upgrade-483730
	I0817 21:46:37.090673  125284 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-483730 --format={{.State.Status}}
	I0817 21:46:37.142950  125284 kic.go:426] container "kubernetes-upgrade-483730" state is running.
	I0817 21:46:37.143347  125284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-483730
	I0817 21:46:37.189105  125284 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/config.json ...
	I0817 21:46:37.189335  125284 machine.go:88] provisioning docker machine ...
	I0817 21:46:37.189367  125284 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-483730"
	I0817 21:46:37.189426  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:37.228057  125284 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:37.228509  125284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32952 <nil> <nil>}
	I0817 21:46:37.228526  125284 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-483730 && echo "kubernetes-upgrade-483730" | sudo tee /etc/hostname
	I0817 21:46:37.229247  125284 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0817 21:46:40.394289  125284 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-483730
	
	I0817 21:46:40.394405  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:40.424544  125284 main.go:141] libmachine: Using SSH client type: native
	I0817 21:46:40.424969  125284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39ff20] 0x3a28b0 <nil>  [] 0s} 127.0.0.1 32952 <nil> <nil>}
	I0817 21:46:40.424987  125284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-483730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-483730/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-483730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:46:40.575643  125284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:46:40.575712  125284 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-2431/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-2431/.minikube}
	I0817 21:46:40.575752  125284 ubuntu.go:177] setting up certificates
	I0817 21:46:40.575780  125284 provision.go:83] configureAuth start
	I0817 21:46:40.575851  125284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-483730
	I0817 21:46:40.600431  125284 provision.go:138] copyHostCerts
	I0817 21:46:40.600488  125284 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem, removing ...
	I0817 21:46:40.600515  125284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem
	I0817 21:46:40.600573  125284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/ca.pem (1078 bytes)
	I0817 21:46:40.600679  125284 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem, removing ...
	I0817 21:46:40.600684  125284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem
	I0817 21:46:40.600705  125284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/cert.pem (1123 bytes)
	I0817 21:46:40.600764  125284 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem, removing ...
	I0817 21:46:40.600768  125284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem
	I0817 21:46:40.600785  125284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-2431/.minikube/key.pem (1675 bytes)
	I0817 21:46:40.600835  125284 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-483730 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-483730]
	I0817 21:46:40.908844  125284 provision.go:172] copyRemoteCerts
	I0817 21:46:40.908914  125284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:46:40.908981  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:40.931152  125284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32952 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/kubernetes-upgrade-483730/id_rsa Username:docker}
	I0817 21:46:39.851946  123518 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:46:39.855692  123518 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:46:39.855719  123518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:46:39.855731  123518 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:46:39.855754  123518 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 21:46:39.855768  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:46:39.855828  123518 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:46:39.855908  123518 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:46:39.856012  123518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:46:39.864714  123518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:46:39.888145  123518 start.go:303] post-start completed in 145.712148ms
	I0817 21:46:39.888268  123518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:46:39.888332  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:39.906465  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:39.996247  123518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:46:40.001741  123518 fix.go:56] fixHost completed within 28.055171625s
	I0817 21:46:40.001764  123518 start.go:83] releasing machines lock for "missing-upgrade-790957", held for 28.055216482s
	I0817 21:46:40.001829  123518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-790957
	I0817 21:46:40.027831  123518 ssh_runner.go:195] Run: cat /version.json
	I0817 21:46:40.027887  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:40.027896  123518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:46:40.027960  123518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-790957
	I0817 21:46:40.048898  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	I0817 21:46:40.060110  123518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32947 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/missing-upgrade-790957/id_rsa Username:docker}
	W0817 21:46:40.255409  123518 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0817 21:46:40.255508  123518 ssh_runner.go:195] Run: systemctl --version
	I0817 21:46:40.264843  123518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:46:40.273261  123518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:46:40.313273  123518 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:46:40.313347  123518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:46:40.345453  123518 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
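The two find commands above are minikube's CNI cleanup: the first patches any loopback config so it carries a "name" field and a cniVersion of 1.0.0 (required by newer CNI plugins), the second renames bridge/podman configs to *.mk_disabled so only minikube's chosen CNI (kindnet, per the cni.go:143 line later in this log) stays active. A minimal sketch of the loopback patch, assuming a hypothetical config file name:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        const path = "/etc/cni/net.d/200-loopback.conf" // hypothetical file name
        raw, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        conf := map[string]any{}
        if err := json.Unmarshal(raw, &conf); err != nil {
            log.Fatal(err)
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback" // newer CNI specs require a network name
        }
        conf["cniVersion"] = "1.0.0"
        out, _ := json.MarshalIndent(conf, "", "  ")
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }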
	I0817 21:46:40.345521  123518 start.go:466] detecting cgroup driver to use...
	I0817 21:46:40.345564  123518 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:46:40.345640  123518 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:46:40.359075  123518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:46:40.372188  123518 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:46:40.372285  123518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:46:40.388174  123518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:46:40.404841  123518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 21:46:40.419631  123518 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 21:46:40.419710  123518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:46:40.538959  123518 docker.go:212] disabling docker service ...
	I0817 21:46:40.539018  123518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:46:40.564532  123518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:46:40.579179  123518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:46:40.710238  123518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:46:40.842944  123518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:46:40.856564  123518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:46:40.874687  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0817 21:46:40.886724  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:46:40.897688  123518 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:46:40.897796  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:46:40.908546  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:40.920192  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:46:40.934813  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:40.951016  123518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:46:40.961867  123518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0817 21:46:40.976300  123518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:46:40.986182  123518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:46:40.995446  123518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:46:41.119417  123518 ssh_runner.go:195] Run: sudo systemctl restart containerd
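The sed chain above rewrites /etc/containerd/config.toml for the detected "cgroupfs" host driver: it pins sandbox_image to the pause image matching the Kubernetes version, clears restrict_oom_score_adj, forces SystemdCgroup = false, migrates v1 runtime entries to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before restarting containerd. A minimal sketch of one of those edits, the SystemdCgroup toggle, as the same regexp rewrite in Go:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        raw, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(raw, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }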
	I0817 21:46:41.217354  123518 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:46:41.217420  123518 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:46:41.221964  123518 start.go:534] Will wait 60s for crictl version
	I0817 21:46:41.222026  123518 ssh_runner.go:195] Run: which crictl
	I0817 21:46:41.225856  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:41.262130  123518 retry.go:31] will retry after 14.688849796s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:46:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
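This first failure is expected: the crictl shipped in the v0.0.25 kic base speaks only the runtime.v1alpha2 CRI, which the just-restarted containerd is not serving (an inference from the repeated v1alpha2 errors for this profile, not something the log states outright; the parallel 125284 profile instead gets "server is not initialized yet" and succeeds on retry). minikube's retry helper (retry.go:31) backs off with roughly doubling, jittered delays, visible in the 14.7s and 18.7s intervals. A minimal sketch of that loop:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        wait := 5 * time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Printf("crictl ready:\n%s", out)
                return
            }
            // Randomized, roughly doubling delay, as in the logged intervals.
            d := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, d)
            time.Sleep(d)
            wait *= 2
        }
    }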
	I0817 21:46:41.033138  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:46:41.075609  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0817 21:46:41.104622  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:46:41.135952  125284 provision.go:86] duration metric: configureAuth took 560.147859ms
	I0817 21:46:41.136012  125284 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:46:41.136211  125284 config.go:182] Loaded profile config "kubernetes-upgrade-483730": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0-rc.1
	I0817 21:46:41.136242  125284 machine.go:91] provisioned docker machine in 3.946898615s
	I0817 21:46:41.136265  125284 start.go:300] post-start starting for "kubernetes-upgrade-483730" (driver="docker")
	I0817 21:46:41.136288  125284 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:46:41.136353  125284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:46:41.136414  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:41.159396  125284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32952 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/kubernetes-upgrade-483730/id_rsa Username:docker}
	I0817 21:46:41.259168  125284 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:46:41.264034  125284 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:46:41.264093  125284 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:46:41.264105  125284 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:46:41.264111  125284 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:46:41.264119  125284 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/addons for local assets ...
	I0817 21:46:41.264185  125284 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-2431/.minikube/files for local assets ...
	I0817 21:46:41.264282  125284 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem -> 77452.pem in /etc/ssl/certs
	I0817 21:46:41.264412  125284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:46:41.275513  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:46:41.304299  125284 start.go:303] post-start completed in 168.007186ms
	I0817 21:46:41.304406  125284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:46:41.304482  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:41.327138  125284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32952 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/kubernetes-upgrade-483730/id_rsa Username:docker}
	I0817 21:46:41.417333  125284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:46:41.423268  125284 fix.go:56] fixHost completed within 4.978065095s
	I0817 21:46:41.423297  125284 start.go:83] releasing machines lock for "kubernetes-upgrade-483730", held for 4.978115694s
	I0817 21:46:41.423371  125284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-483730
	I0817 21:46:41.445067  125284 ssh_runner.go:195] Run: cat /version.json
	I0817 21:46:41.445121  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:41.445124  125284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:46:41.445189  125284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-483730
	I0817 21:46:41.467351  125284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32952 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/kubernetes-upgrade-483730/id_rsa Username:docker}
	I0817 21:46:41.480355  125284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32952 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/kubernetes-upgrade-483730/id_rsa Username:docker}
	I0817 21:46:41.567208  125284 ssh_runner.go:195] Run: systemctl --version
	I0817 21:46:41.746734  125284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:46:41.752028  125284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0817 21:46:41.773472  125284 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:46:41.773549  125284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:46:41.785457  125284 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0817 21:46:41.785477  125284 start.go:466] detecting cgroup driver to use...
	I0817 21:46:41.785506  125284 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:46:41.785551  125284 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0817 21:46:41.801900  125284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0817 21:46:41.816420  125284 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:46:41.816483  125284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:46:41.833181  125284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:46:41.846731  125284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:46:41.939486  125284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:46:42.034523  125284 docker.go:212] disabling docker service ...
	I0817 21:46:42.034604  125284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:46:42.050185  125284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:46:42.064387  125284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:46:42.159378  125284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:46:42.255557  125284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:46:42.269191  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:46:42.289414  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0817 21:46:42.301957  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0817 21:46:42.314064  125284 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0817 21:46:42.314162  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0817 21:46:42.327050  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:42.339409  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0817 21:46:42.351635  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0817 21:46:42.363832  125284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:46:42.375176  125284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0817 21:46:42.387142  125284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:46:42.398105  125284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:46:42.408329  125284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:46:42.505082  125284 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0817 21:46:42.584878  125284 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 21:46:42.584978  125284 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0817 21:46:42.590664  125284 start.go:534] Will wait 60s for crictl version
	I0817 21:46:42.590761  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:46:42.595630  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:42.642747  125284 retry.go:31] will retry after 11.645099452s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:46:42Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 21:46:54.288107  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:54.326760  125284 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0817 21:46:54.326829  125284 ssh_runner.go:195] Run: containerd --version
	I0817 21:46:54.355008  125284 ssh_runner.go:195] Run: containerd --version
	I0817 21:46:54.384566  125284 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on containerd 1.6.21 ...
	I0817 21:46:54.386487  125284 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-483730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:46:54.406376  125284 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0817 21:46:54.411017  125284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
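The grep/bash pair above pins host.minikube.internal to the docker network gateway (192.168.67.1) so the node can reach the host: it strips any stale entry, appends the fresh mapping, and copies the result back over /etc/hosts through a temp file. The same rewrite, sketched locally in Go:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.67.1\thost.minikube.internal"
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        lines := strings.Split(strings.TrimRight(string(raw), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if !strings.HasSuffix(line, "host.minikube.internal") {
                kept = append(kept, line) // drop any stale mapping
            }
        }
        kept = append(kept, entry)
        // minikube writes via /tmp/h.$$ then sudo cp; written directly here.
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            log.Fatal(err)
        }
    }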
	I0817 21:46:54.423881  125284 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime containerd
	I0817 21:46:54.423945  125284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:46:54.461020  125284 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0817 21:46:54.461085  125284 ssh_runner.go:195] Run: which lz4
	I0817 21:46:54.465530  125284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 21:46:54.469766  125284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:46:54.469800  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (388372381 bytes)
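Because the node has no /preloaded.tar.lz4 (the stat probe above fails), minikube falls back to copying the ~388 MB preloaded-images tarball for v1.28.0-rc.1/containerd/arm64 from the host cache and unpacking it into /var. A minimal sketch of the check-then-extract step (the scp leg is elided):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
            log.Printf("preload missing on node: %v (would scp it from the host cache)", err)
            return
        }
        // Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }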
	I0817 21:46:55.951201  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:46:55.983968  123518 retry.go:31] will retry after 18.684623322s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:46:55Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0817 21:46:59.377782  125284 containerd.go:547] Took 4.912292 seconds to copy over tarball
	I0817 21:46:59.377871  125284 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:47:01.797666  125284 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.419768295s)
	I0817 21:47:01.797689  125284 containerd.go:554] Took 2.419874 seconds to extract the tarball
	I0817 21:47:01.797702  125284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 21:47:01.849416  125284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:47:01.973659  125284 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0817 21:47:02.125289  125284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:47:02.189804  125284 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.0-rc.1 registry.k8s.io/kube-controller-manager:v1.28.0-rc.1 registry.k8s.io/kube-scheduler:v1.28.0-rc.1 registry.k8s.io/kube-proxy:v1.28.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 21:47:02.189875  125284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:47:02.190089  125284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 21:47:02.190207  125284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 21:47:02.190290  125284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 21:47:02.190385  125284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 21:47:02.190458  125284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0817 21:47:02.190532  125284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0817 21:47:02.190606  125284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 21:47:02.193298  125284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0817 21:47:02.193338  125284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 21:47:02.193398  125284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 21:47:02.193436  125284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:47:02.193707  125284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 21:47:02.193710  125284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 21:47:02.193738  125284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0817 21:47:02.193303  125284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 21:47:02.608068  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.0-rc.1"
	I0817 21:47:02.633747  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.0-rc.1"
	I0817 21:47:02.633867  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.0-rc.1"
	I0817 21:47:02.636839  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.0-rc.1"
	I0817 21:47:02.664079  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I0817 21:47:02.669872  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0"
	I0817 21:47:02.685132  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	W0817 21:47:02.764832  125284 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0817 21:47:02.764955  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0817 21:47:03.324961  125284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.0-rc.1" does not exist at hash "39802b9ca605639e865a07534de11c6bd38a0b7c7c5a7cc14bba64be179e4d7d" in container runtime
	I0817 21:47:03.325009  125284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 21:47:03.325055  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.570926  125284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.0-rc.1" does not exist at hash "cec03cfb725abb74e19c852a44ca2c56fdc0d9949c050f8358dca7396c3ed31f" in container runtime
	I0817 21:47:03.570967  125284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 21:47:03.571016  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.571075  125284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.0-rc.1" does not exist at hash "b96a0b077bb7c2a3ba61ed371a459969532e7c4746241e057971597e99bb4ddf" in container runtime
	I0817 21:47:03.571092  125284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 21:47:03.571114  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.571176  125284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.0-rc.1" does not exist at hash "0bdd75d4a78e34fbfa0720cd70b0d0db3379f9ad0985839fb7e24749843c2d4e" in container runtime
	I0817 21:47:03.571192  125284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 21:47:03.571224  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.621624  125284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108" in container runtime
	I0817 21:47:03.621666  125284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0817 21:47:03.621715  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.621793  125284 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace" in container runtime
	I0817 21:47:03.621812  125284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0817 21:47:03.621831  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.621900  125284 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e" in container runtime
	I0817 21:47:03.621917  125284 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0817 21:47:03.621936  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.648266  125284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0817 21:47:03.648346  125284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:47:03.648365  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.0-rc.1
	I0817 21:47:03.648426  125284 ssh_runner.go:195] Run: which crictl
	I0817 21:47:03.648475  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.0-rc.1
	I0817 21:47:03.648545  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.0-rc.1
	I0817 21:47:03.648559  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.0-rc.1
	I0817 21:47:03.648631  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0817 21:47:03.648676  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0817 21:47:03.648697  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0817 21:47:04.283079  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0817 21:47:04.283142  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.1
	I0817 21:47:04.283186  125284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:47:04.283224  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.1
	I0817 21:47:04.283249  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.1
	I0817 21:47:04.283281  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.1
	I0817 21:47:04.283286  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0817 21:47:04.283304  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0817 21:47:04.336714  125284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0817 21:47:04.336881  125284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0817 21:47:04.341277  125284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0817 21:47:04.341333  125284 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 21:47:04.341406  125284 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0817 21:47:04.824828  125284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0817 21:47:04.824926  125284 cache_images.go:92] LoadImages completed in 2.635096009s
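A mechanic worth noting in the lines above: cached images are imported with `ctr -n=k8s.io images import` rather than plain `ctr images import` because containerd namespaces isolate content, and Kubernetes' CRI plugin only sees the k8s.io namespace; an image imported into the default namespace would be invisible to kubelet. A minimal sketch of the same step with a hypothetical tarball path:

  # import a saved image tarball where the CRI (and thus kubelet) can see it
  sudo ctr -n=k8s.io images import /var/lib/minikube/images/example.tar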
	W0817 21:47:04.825037  125284 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-2431/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0: no such file or directory
	I0817 21:47:04.825112  125284 ssh_runner.go:195] Run: sudo crictl info
	I0817 21:47:04.865915  125284 cni.go:84] Creating CNI manager for ""
	I0817 21:47:04.865940  125284 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:47:04.865951  125284 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:47:04.865993  125284 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-483730 NodeName:kubernetes-upgrade-483730 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:47:04.866175  125284 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-483730"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
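The rendered config above is plain kubeadm v1beta3 plus kubelet and kube-proxy component configs, so once written out (it lands at /var/tmp/minikube/kubeadm.yaml.new a few lines below) it can be checked independently of minikube. A sketch, not part of this test run, assuming the kubeadm binary minikube staged on the node; `kubeadm config validate` exists only in recent kubeadm releases (v1.27+):

  # validate the rendered file against the current kubeadm API schema
  sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new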
	
	I0817 21:47:04.866283  125284 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-483730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-483730 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
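The doubled ExecStart= in the kubelet drop-in above is standard systemd override semantics, not a bug: the first, empty ExecStart= clears the command list inherited from the base kubelet.service, and the second sets the replacement; without the empty line, systemd rejects a second ExecStart for a non-oneshot service. The same pattern in isolation, with hypothetical paths:

  # /etc/systemd/system/kubelet.service.d/override.conf (sketch)
  [Service]
  ExecStart=
  ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml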
	I0817 21:47:04.866376  125284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0-rc.1
	I0817 21:47:04.876526  125284 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:47:04.876637  125284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:47:04.886720  125284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (402 bytes)
	I0817 21:47:04.906962  125284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 21:47:04.926703  125284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2119 bytes)
	I0817 21:47:04.946821  125284 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:47:04.951092  125284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
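The one-liner above replaces-or-adds the control-plane hosts entry safely: grep -v strips any stale line, echo appends the fresh mapping, the result goes to a temp file, and sudo cp installs it (a direct `sudo ... > /etc/hosts` would perform the redirect as the unprivileged user and fail). The same idiom with hypothetical values:

  # replace-or-add a tab-separated hosts entry (sketch)
  { grep -v $'\texample.internal$' /etc/hosts; echo -e "10.0.0.5\texample.internal"; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts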
	I0817 21:47:04.964186  125284 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730 for IP: 192.168.67.2
	I0817 21:47:04.964215  125284 certs.go:190] acquiring lock for shared ca certs: {Name:mk058988a603cd06c6d056488c4bdaf60bd886a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:47:04.964347  125284 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key
	I0817 21:47:04.964395  125284 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key
	I0817 21:47:04.964469  125284 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/client.key
	I0817 21:47:04.964534  125284 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/apiserver.key.c7fa3a9e
	I0817 21:47:04.964582  125284 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/proxy-client.key
	I0817 21:47:04.964695  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem (1338 bytes)
	W0817 21:47:04.964729  125284 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745_empty.pem, impossibly tiny 0 bytes
	I0817 21:47:04.964742  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca-key.pem (1675 bytes)
	I0817 21:47:04.964768  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:47:04.964796  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:47:04.964828  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/certs/home/jenkins/minikube-integration/16865-2431/.minikube/certs/key.pem (1675 bytes)
	I0817 21:47:04.964879  125284 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem (1708 bytes)
	I0817 21:47:04.965441  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:47:04.993623  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 21:47:05.024158  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:47:05.054193  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:47:05.082677  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:47:05.112507  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0817 21:47:05.140494  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:47:05.168765  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 21:47:05.198405  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/ssl/certs/77452.pem --> /usr/share/ca-certificates/77452.pem (1708 bytes)
	I0817 21:47:05.225692  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:47:05.253575  125284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-2431/.minikube/certs/7745.pem --> /usr/share/ca-certificates/7745.pem (1338 bytes)
	I0817 21:47:05.280612  125284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:47:05.300429  125284 ssh_runner.go:195] Run: openssl version
	I0817 21:47:05.307715  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77452.pem && ln -fs /usr/share/ca-certificates/77452.pem /etc/ssl/certs/77452.pem"
	I0817 21:47:05.319354  125284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77452.pem
	I0817 21:47:05.323750  125284 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:18 /usr/share/ca-certificates/77452.pem
	I0817 21:47:05.323819  125284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77452.pem
	I0817 21:47:05.332402  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77452.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:47:05.343083  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:47:05.354048  125284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:05.358787  125284 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:05.358851  125284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:47:05.367336  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:47:05.377885  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7745.pem && ln -fs /usr/share/ca-certificates/7745.pem /etc/ssl/certs/7745.pem"
	I0817 21:47:05.389363  125284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7745.pem
	I0817 21:47:05.393855  125284 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:18 /usr/share/ca-certificates/7745.pem
	I0817 21:47:05.393936  125284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7745.pem
	I0817 21:47:05.402135  125284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7745.pem /etc/ssl/certs/51391683.0"
	I0817 21:47:05.412818  125284 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:47:05.417012  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0817 21:47:05.425082  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0817 21:47:05.433368  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0817 21:47:05.441244  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0817 21:47:05.449210  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0817 21:47:05.457389  125284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0817 21:47:05.465561  125284 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-483730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-483730 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:47:05.465647  125284 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 21:47:05.465719  125284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:47:05.505702  125284 cri.go:89] found id: ""
	I0817 21:47:05.505803  125284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:47:05.515622  125284 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0817 21:47:05.515684  125284 kubeadm.go:636] restartCluster start
	I0817 21:47:05.515764  125284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0817 21:47:05.525690  125284 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 21:47:05.526294  125284 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-483730" does not appear in /home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:47:05.526541  125284 kubeconfig.go:146] "kubernetes-upgrade-483730" context is missing from /home/jenkins/minikube-integration/16865-2431/kubeconfig - will repair!
	I0817 21:47:05.527205  125284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/kubeconfig: {Name:mkf341824bbe915f226637e75b19e0928287e2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:47:05.528012  125284 kapi.go:59] client config for kubernetes-upgrade-483730: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kubernetes-upgrade-483730/client.key", CAFile:"/home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16ec6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:47:05.529120  125284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 21:47:05.540307  125284 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-08-17 21:45:51.605664813 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-08-17 21:47:04.940225053 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /run/containerd/containerd.sock
	+  criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-483730"
	   kubeletExtraArgs:
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-483730
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.28.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
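This diff explains the "needs reconfigure" decision above: the on-disk file was generated for kubeadm's v1beta1 API and Kubernetes v1.16, which current kubeadm no longer accepts, while the staged v1.28.0-rc.1 binary writes v1beta3, so minikube regenerates the config wholesale instead of migrating it in place. To see what the new binary considers canonical for comparison (a sketch, assuming the staged binary):

  # print this kubeadm's default init configuration
  sudo /var/lib/minikube/binaries/v1.28.0-rc.1/kubeadm config print init-defaults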
	I0817 21:47:05.540330  125284 kubeadm.go:1128] stopping kube-system containers ...
	I0817 21:47:05.540341  125284 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 21:47:05.540412  125284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:47:05.579044  125284 cri.go:89] found id: ""
	I0817 21:47:05.579148  125284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0817 21:47:05.593328  125284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:47:05.603879  125284 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Aug 17 21:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Aug 17 21:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Aug 17 21:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Aug 17 21:46 /etc/kubernetes/scheduler.conf
	
	I0817 21:47:05.603959  125284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 21:47:05.614390  125284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 21:47:05.625090  125284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 21:47:05.635353  125284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 21:47:05.645654  125284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:47:05.656265  125284 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 21:47:05.656327  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:47:05.715809  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:47:07.412075  125284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.696228818s)
	I0817 21:47:07.412101  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:47:07.589885  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:47:07.660316  125284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 21:47:07.745499  125284 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:47:07.745567  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:07.761175  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:08.273397  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:08.772853  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:09.272899  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:09.772861  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:10.273746  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:10.773247  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:14.670729  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:47:14.756584  123518 retry.go:31] will retry after 17.699083176s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0817 21:47:11.272883  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:11.772903  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:12.273557  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:12.772880  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:13.273592  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:13.772935  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:14.273219  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:14.773213  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:15.272916  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:15.773566  125284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:47:15.794226  125284 api_server.go:72] duration metric: took 8.048725301s to wait for apiserver process to appear ...
	I0817 21:47:15.794245  125284 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:47:15.794264  125284 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0817 21:47:20.794848  125284 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 21:47:20.794882  125284 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0817 21:47:25.795428  125284 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 21:47:26.296466  125284 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0817 21:47:32.456561  123518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:47:32.488543  123518 out.go:177] 
	W0817 21:47:32.490370  123518 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0817 21:47:32.490385  123518 out.go:239] * 
	W0817 21:47:32.491297  123518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:47:32.494203  123518 out.go:177] 
	
	* 
	* ==> container status <==
	* 
	* ==> containerd <==
	* -- Logs begin at Thu 2023-08-17 21:46:36 UTC, end at Thu 2023-08-17 21:47:33 UTC. --
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213244383Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213308939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213374997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213431505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213532689Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.213646369Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214194641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214295127Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214385406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214447715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214509548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214585378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214686430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214748829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214809743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214867670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.214929322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215026255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215088753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215151890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215215355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215572378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.215628311Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 21:46:41 missing-upgrade-790957 systemd[1]: Started containerd container runtime.
	Aug 17 21:46:41 missing-upgrade-790957 containerd[629]: time="2023-08-17T21:46:41.216820231Z" level=info msg="containerd successfully booted in 0.047687s"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000711] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000e816f04d
	[  +0.001056] FS-Cache: N-key=[8] '9c385c0100000000'
	[  +0.002499] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=000000000650bc75
	[  +0.001039] FS-Cache: O-key=[8] '9c385c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000dd57f7a2
	[  +0.001042] FS-Cache: N-key=[8] '9c385c0100000000'
	[  +2.912460] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000961] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=0000000082ea674a
	[  +0.001069] FS-Cache: O-key=[8] '9b385c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000e816f04d
	[  +0.001048] FS-Cache: N-key=[8] '9b385c0100000000'
	[  +0.378950] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=000000001c519bd9{9p.inode} n=00000000488aa4bd
	[  +0.001095] FS-Cache: O-key=[8] 'a3385c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=000000001c519bd9{9p.inode} n=00000000cecce696
	[  +0.001165] FS-Cache: N-key=[8] 'a3385c0100000000'
	[Aug17 21:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> kernel <==
	*  21:47:34 up  1:29,  0 users,  load average: 1.97, 2.29, 1.70
	Linux missing-upgrade-790957 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-08-17 21:46:36 UTC, end at Thu 2023-08-17 21:47:34 UTC. --
	-- No entries --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 21:47:33.244118  127818 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.275225  127818 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.303163  127818 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.331512  127818 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.367185  127818 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.395518  127818 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.427997  127818 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.457215  127818 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0817 21:47:33.897206  127818 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-17T21:47:33Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-08-17T21:47:33Z\" level=fatal msg=\"listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0817 21:47:34.304165  127818 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-790957 -n missing-upgrade-790957
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-790957 -n missing-upgrade-790957: exit status 2 (328.98613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "missing-upgrade-790957" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-790957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-790957
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-790957: (2.212486187s)
--- FAIL: TestMissingContainerUpgrade (221.40s)
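The recurring failure in this test is not the apiserver itself but the CRI layer: every crictl call returns `rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService`, meaning the containerd on the upgraded node is not serving the CRI v1alpha2 API that this crictl requests, either because the CRI plugin did not load (the containerd plugin list above shows no io.containerd.grpc.v1.cri entry) or because the crictl and containerd versions disagree about the supported CRI version. Two node-side checks that would narrow it down (a sketch, assuming shell access to the node):

  # ask the daemon for its CRI version over an explicit endpoint; the same
  # Unimplemented error here confirms no runtime.v1alpha2 service is registered
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
  # list containerd's registered plugins; a healthy CRI setup shows
  # io.containerd.grpc.v1.cri in state ok
  sudo ctr plugins ls | grep cri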

                                                
                                    

Test pass (269/310)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 29.93
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.4/json-events 24.56
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.07
17 TestDownloadOnly/v1.28.0-rc.1/json-events 27.81
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.6
29 TestAddons/Setup 131.47
31 TestAddons/parallel/Registry 126.81
33 TestAddons/parallel/InspektorGadget 10.8
34 TestAddons/parallel/MetricsServer 5.86
37 TestAddons/parallel/CSI 48.45
38 TestAddons/parallel/Headlamp 26.73
39 TestAddons/parallel/CloudSpanner 5.74
42 TestAddons/serial/GCPAuth/Namespaces 0.19
43 TestAddons/StoppedEnableDisable 12.3
44 TestCertOptions 39.8
45 TestCertExpiration 232.29
47 TestForceSystemdFlag 41.2
48 TestForceSystemdEnv 35.8
49 TestDockerEnvContainerd 46.95
54 TestErrorSpam/setup 31.64
55 TestErrorSpam/start 0.87
56 TestErrorSpam/status 1.07
57 TestErrorSpam/pause 1.78
58 TestErrorSpam/unpause 1.99
59 TestErrorSpam/stop 1.54
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 61.58
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 21.02
66 TestFunctional/serial/KubeContext 0.06
67 TestFunctional/serial/KubectlGetPods 0.11
70 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
71 TestFunctional/serial/CacheCmd/cache/add_local 1.43
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
73 TestFunctional/serial/CacheCmd/cache/list 0.06
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
75 TestFunctional/serial/CacheCmd/cache/cache_reload 2.29
76 TestFunctional/serial/CacheCmd/cache/delete 0.12
77 TestFunctional/serial/MinikubeKubectlCmd 0.19
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/LogsCmd 1.54
82 TestFunctional/serial/LogsFileCmd 1.58
83 TestFunctional/serial/InvalidService 5.9
85 TestFunctional/parallel/ConfigCmd 0.47
86 TestFunctional/parallel/DashboardCmd 9.51
87 TestFunctional/parallel/DryRun 0.65
88 TestFunctional/parallel/InternationalLanguage 0.32
89 TestFunctional/parallel/StatusCmd 1.29
93 TestFunctional/parallel/ServiceCmdConnect 7.77
94 TestFunctional/parallel/AddonsCmd 0.17
95 TestFunctional/parallel/PersistentVolumeClaim 25.48
97 TestFunctional/parallel/SSHCmd 0.73
98 TestFunctional/parallel/CpCmd 1.54
100 TestFunctional/parallel/FileSync 0.39
101 TestFunctional/parallel/CertSync 2.14
105 TestFunctional/parallel/NodeLabels 0.09
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
109 TestFunctional/parallel/License 0.34
110 TestFunctional/parallel/Version/short 0.07
111 TestFunctional/parallel/Version/components 1.42
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.98
117 TestFunctional/parallel/ImageCommands/Setup 1.79
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
122 TestFunctional/parallel/ServiceCmd/DeployApp 9.29
125 TestFunctional/parallel/ServiceCmd/List 0.42
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
128 TestFunctional/parallel/ServiceCmd/Format 0.5
129 TestFunctional/parallel/ServiceCmd/URL 0.54
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.64
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
146 TestFunctional/parallel/ProfileCmd/profile_list 0.42
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
148 TestFunctional/parallel/MountCmd/any-port 7.79
149 TestFunctional/parallel/MountCmd/specific-port 2.48
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2.63
151 TestFunctional/delete_addon-resizer_images 0.08
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestIngressAddonLegacy/StartLegacyK8sCluster 87.75
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.02
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.7
164 TestJSONOutput/start/Command 95.82
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.8
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.74
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.88
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.23
189 TestKicCustomNetwork/create_custom_network 41.57
190 TestKicCustomNetwork/use_default_bridge_network 33.76
191 TestKicExistingNetwork 32.09
192 TestKicCustomSubnet 37.94
193 TestKicStaticIP 34.56
194 TestMainNoArgs 0.05
195 TestMinikubeProfile 78.28
198 TestMountStart/serial/StartWithMountFirst 8.94
199 TestMountStart/serial/VerifyMountFirst 0.27
200 TestMountStart/serial/StartWithMountSecond 6.59
201 TestMountStart/serial/VerifyMountSecond 0.27
202 TestMountStart/serial/DeleteFirst 1.65
203 TestMountStart/serial/VerifyMountPostDelete 0.28
204 TestMountStart/serial/Stop 1.22
205 TestMountStart/serial/RestartStopped 7.96
206 TestMountStart/serial/VerifyMountPostStop 0.28
209 TestMultiNode/serial/FreshStart2Nodes 105.54
210 TestMultiNode/serial/DeployApp2Nodes 4.8
211 TestMultiNode/serial/PingHostFrom2Pods 1.18
212 TestMultiNode/serial/AddNode 19.12
213 TestMultiNode/serial/ProfileList 0.36
214 TestMultiNode/serial/CopyFile 10.65
215 TestMultiNode/serial/StopNode 2.32
216 TestMultiNode/serial/StartAfterStop 12.72
217 TestMultiNode/serial/RestartKeepsNodes 147.27
218 TestMultiNode/serial/DeleteNode 5.09
219 TestMultiNode/serial/StopMultiNode 24.09
220 TestMultiNode/serial/RestartMultiNode 97.08
221 TestMultiNode/serial/ValidateNameConflict 36.16
226 TestPreload 167.5
228 TestScheduledStopUnix 118.16
231 TestInsufficientStorage 10.9
232 TestRunningBinaryUpgrade 122.19
234 TestKubernetesUpgrade 415.27
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
238 TestNoKubernetes/serial/StartWithK8s 49.75
239 TestNoKubernetes/serial/StartWithStopK8s 22.47
240 TestNoKubernetes/serial/Start 7.43
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
242 TestNoKubernetes/serial/ProfileList 0.88
243 TestNoKubernetes/serial/Stop 1.22
244 TestNoKubernetes/serial/StartNoArgs 6.65
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
246 TestStoppedBinaryUpgrade/Setup 1.39
247 TestStoppedBinaryUpgrade/Upgrade 138.22
248 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
257 TestPause/serial/Start 107.37
265 TestNetworkPlugins/group/false 4.66
269 TestPause/serial/SecondStartNoReconfiguration 17.36
270 TestPause/serial/Pause 0.94
271 TestPause/serial/VerifyStatus 0.46
272 TestPause/serial/Unpause 0.94
273 TestPause/serial/PauseAgain 1.13
274 TestPause/serial/DeletePaused 2.59
275 TestPause/serial/VerifyDeletedResources 0.45
277 TestStartStop/group/old-k8s-version/serial/FirstStart 121.5
278 TestStartStop/group/old-k8s-version/serial/DeployApp 8.57
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
280 TestStartStop/group/old-k8s-version/serial/Stop 12.14
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
282 TestStartStop/group/old-k8s-version/serial/SecondStart 655.45
284 TestStartStop/group/no-preload/serial/FirstStart 93.99
285 TestStartStop/group/no-preload/serial/DeployApp 9.55
286 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
287 TestStartStop/group/no-preload/serial/Stop 12.1
288 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
289 TestStartStop/group/no-preload/serial/SecondStart 352.08
290 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.03
291 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
292 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
293 TestStartStop/group/no-preload/serial/Pause 3.35
295 TestStartStop/group/embed-certs/serial/FirstStart 89.97
296 TestStartStop/group/embed-certs/serial/DeployApp 9.6
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.51
298 TestStartStop/group/embed-certs/serial/Stop 12.13
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/embed-certs/serial/SecondStart 342.47
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.54
304 TestStartStop/group/old-k8s-version/serial/Pause 4.37
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.91
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.53
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.97
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
315 TestStartStop/group/embed-certs/serial/Pause 3.29
317 TestStartStop/group/newest-cni/serial/FirstStart 43.71
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
320 TestStartStop/group/newest-cni/serial/Stop 1.26
321 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
322 TestStartStop/group/newest-cni/serial/SecondStart 30.24
323 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
326 TestStartStop/group/newest-cni/serial/Pause 3.22
327 TestNetworkPlugins/group/auto/Start 100.6
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.03
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.28
332 TestNetworkPlugins/group/kindnet/Start 90.79
333 TestNetworkPlugins/group/auto/KubeletFlags 0.56
334 TestNetworkPlugins/group/auto/NetCatPod 10.59
335 TestNetworkPlugins/group/auto/DNS 0.21
336 TestNetworkPlugins/group/auto/Localhost 0.19
337 TestNetworkPlugins/group/auto/HairPin 0.23
338 TestNetworkPlugins/group/calico/Start 83.45
339 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.6
342 TestNetworkPlugins/group/kindnet/DNS 0.23
343 TestNetworkPlugins/group/kindnet/Localhost 0.19
344 TestNetworkPlugins/group/kindnet/HairPin 0.19
345 TestNetworkPlugins/group/custom-flannel/Start 65.39
346 TestNetworkPlugins/group/calico/ControllerPod 5.06
347 TestNetworkPlugins/group/calico/KubeletFlags 0.39
348 TestNetworkPlugins/group/calico/NetCatPod 11.6
349 TestNetworkPlugins/group/calico/DNS 0.3
350 TestNetworkPlugins/group/calico/Localhost 0.29
351 TestNetworkPlugins/group/calico/HairPin 0.28
352 TestNetworkPlugins/group/enable-default-cni/Start 45.41
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.51
355 TestNetworkPlugins/group/custom-flannel/DNS 0.27
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
359 TestNetworkPlugins/group/flannel/Start 58.32
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.58
361 TestNetworkPlugins/group/enable-default-cni/DNS 33.2
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.34
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.35
364 TestNetworkPlugins/group/flannel/ControllerPod 5.03
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
366 TestNetworkPlugins/group/flannel/NetCatPod 9.54
367 TestNetworkPlugins/group/bridge/Start 48.33
368 TestNetworkPlugins/group/flannel/DNS 0.58
369 TestNetworkPlugins/group/flannel/Localhost 0.37
370 TestNetworkPlugins/group/flannel/HairPin 0.25
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
372 TestNetworkPlugins/group/bridge/NetCatPod 9.34
373 TestNetworkPlugins/group/bridge/DNS 34.19
374 TestNetworkPlugins/group/bridge/Localhost 0.19
375 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (29.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (29.924783879s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (29.93s)
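The json-events subtests exercise minikube's machine-readable output: with `-o=json`, each progress step is printed as one JSON object per line (CloudEvents-style, with `type` and `data` fields). Below is a minimal consumer sketch, assuming that schema; the invocation is abridged from the one in the test log.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Abridged from the test invocation above; adjust the binary path for your checkout.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-o=json", "--download-only", "-p", "download-only-481885", "--force",
		"--kubernetes-version=v1.16.0", "--container-runtime=containerd", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintf(os.Stderr, "non-JSON line: %s\n", sc.Text())
			continue
		}
		// "type" and "data" are assumptions based on minikube's CloudEvents-style output.
		fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, "minikube exited with error:", err)
	}
}
```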

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
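preload-exists asserts only that the preloaded-images tarball landed in the local cache after the download-only start. A minimal sketch of the same check, assuming the cache layout shown in this run's logs (`.minikube/cache/preloaded-tarball/...`):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	// Cache path copied from this report's logs; the CI job sets MINIKUBE_HOME
	// elsewhere, so other setups may place the cache differently.
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4")
	info, err := os.Stat(tarball)
	if err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Printf("preload exists: %s (%d bytes)\n", tarball, info.Size())
}
```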

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-481885
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-481885: exit status 85 (70.557491ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-481885        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:48.999305    7751 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:48.999550    7751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:48.999576    7751 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:48.999594    7751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:48.999870    7751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	W0817 21:10:49.000026    7751 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: no such file or directory
	I0817 21:10:49.000427    7751 out.go:303] Setting JSON to true
	I0817 21:10:49.001273    7751 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3188,"bootTime":1692303461,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:10:49.001350    7751 start.go:138] virtualization:  
	I0817 21:10:49.004698    7751 out.go:97] [download-only-481885] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:10:49.006985    7751 out.go:169] MINIKUBE_LOCATION=16865
	W0817 21:10:49.004906    7751 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball: no such file or directory
	I0817 21:10:49.004990    7751 notify.go:220] Checking for updates...
	I0817 21:10:49.011276    7751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:49.013266    7751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:10:49.014994    7751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:10:49.016765    7751 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0817 21:10:49.020614    7751 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:49.020846    7751 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:10:49.050470    7751 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:10:49.050547    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:49.466969    7751 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-08-17 21:10:49.457311204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:10:49.467071    7751 docker.go:294] overlay module found
	I0817 21:10:49.469329    7751 out.go:97] Using the docker driver based on user configuration
	I0817 21:10:49.469366    7751 start.go:298] selected driver: docker
	I0817 21:10:49.469375    7751 start.go:902] validating driver "docker" against <nil>
	I0817 21:10:49.469482    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:49.543895    7751 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-08-17 21:10:49.534909311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:10:49.544042    7751 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:10:49.544301    7751 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0817 21:10:49.544469    7751 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0817 21:10:49.546981    7751 out.go:169] Using Docker driver with root privileges
	I0817 21:10:49.549447    7751 cni.go:84] Creating CNI manager for ""
	I0817 21:10:49.549465    7751 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:10:49.549480    7751 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:10:49.549492    7751 start_flags.go:319] config:
	{Name:download-only-481885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-481885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:49.551656    7751 out.go:97] Starting control plane node download-only-481885 in cluster download-only-481885
	I0817 21:10:49.551672    7751 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:10:49.553692    7751 out.go:97] Pulling base image ...
	I0817 21:10:49.553711    7751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0817 21:10:49.553855    7751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:10:49.571559    7751 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:49.571750    7751 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:10:49.571846    7751 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:49.636090    7751 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0817 21:10:49.636113    7751 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:49.636264    7751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0817 21:10:49.638607    7751 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0817 21:10:49.638637    7751 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:10:49.775504    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0817 21:10:57.083532    7751 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:10:59.548784    7751 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:10:59.548884    7751 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:00.571184    7751 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0817 21:11:00.571526    7751 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/download-only-481885/config.json ...
	I0817 21:11:00.571559    7751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/download-only-481885/config.json: {Name:mk6a26e9dee7ff146db24c2a73ff0e20807775f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:00.571741    7751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0817 21:11:00.571941    7751 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-481885"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
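Note that LogsDuration passes even though `minikube logs` fails: a download-only profile has no control-plane node, so the command exits with status 85 (the "The control plane node \"\" does not exist." message above), and that failure is the behavior under test. A minimal sketch of inspecting such an exit code from Go, with the binary path and profile name copied from the log above:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-481885")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		// This report shows exit status 85 for a profile whose control plane was never created.
		fmt.Printf("minikube logs exited with status %d\n%s", ee.ExitCode(), out)
	case err != nil:
		fmt.Println("could not run minikube:", err)
	default:
		fmt.Println("minikube logs succeeded unexpectedly")
	}
}
```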

TestDownloadOnly/v1.27.4/json-events (24.56s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (24.560196241s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (24.56s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-481885
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-481885: exit status 85 (71.653625ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-481885        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:11 UTC |          |
	|         | -p download-only-481885        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:11:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:11:18.985029    7828 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:11:18.985253    7828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:11:18.985277    7828 out.go:309] Setting ErrFile to fd 2...
	I0817 21:11:18.985299    7828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:11:18.985570    7828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	W0817 21:11:18.985730    7828 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: no such file or directory
	I0817 21:11:18.985995    7828 out.go:303] Setting JSON to true
	I0817 21:11:18.986747    7828 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3218,"bootTime":1692303461,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:11:18.986831    7828 start.go:138] virtualization:  
	I0817 21:11:18.989404    7828 out.go:97] [download-only-481885] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:11:18.991736    7828 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:11:18.989731    7828 notify.go:220] Checking for updates...
	I0817 21:11:18.994130    7828 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:11:18.996137    7828 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:11:18.998680    7828 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:11:19.000537    7828 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0817 21:11:19.004520    7828 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:11:19.005026    7828 config.go:182] Loaded profile config "download-only-481885": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0817 21:11:19.005151    7828 start.go:810] api.Load failed for download-only-481885: filestore "download-only-481885": Docker machine "download-only-481885" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:11:19.005251    7828 driver.go:373] Setting default libvirt URI to qemu:///system
	W0817 21:11:19.005278    7828 start.go:810] api.Load failed for download-only-481885: filestore "download-only-481885": Docker machine "download-only-481885" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:11:19.032542    7828 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:11:19.032620    7828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:11:19.128778    7828 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:11:19.118679374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:11:19.128894    7828 docker.go:294] overlay module found
	I0817 21:11:19.131348    7828 out.go:97] Using the docker driver based on existing profile
	I0817 21:11:19.131375    7828 start.go:298] selected driver: docker
	I0817 21:11:19.131382    7828 start.go:902] validating driver "docker" against &{Name:download-only-481885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-481885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:19.131565    7828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:11:19.211640    7828 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:11:19.20153137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:11:19.212078    7828 cni.go:84] Creating CNI manager for ""
	I0817 21:11:19.212094    7828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:11:19.212104    7828 start_flags.go:319] config:
	{Name:download-only-481885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-481885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:19.214445    7828 out.go:97] Starting control plane node download-only-481885 in cluster download-only-481885
	I0817 21:11:19.214463    7828 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:11:19.216808    7828 out.go:97] Pulling base image ...
	I0817 21:11:19.216840    7828 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:11:19.216990    7828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:11:19.234256    7828 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:11:19.234360    7828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:11:19.234381    7828 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:11:19.234388    7828 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:11:19.234398    7828 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:11:19.285804    7828 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4
	I0817 21:11:19.285828    7828 cache.go:57] Caching tarball of preloaded images
	I0817 21:11:19.285990    7828 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:11:19.288277    7828 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0817 21:11:19.288310    7828 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:19.402610    7828 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cf15c593d5924282ae979f284eb668e1 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4
	I0817 21:11:26.459108    7828 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:26.459209    7828 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:27.262134    7828 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on containerd
	I0817 21:11:27.262267    7828 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/download-only-481885/config.json ...
	I0817 21:11:27.262482    7828 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime containerd
	I0817 21:11:27.262700    7828 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/linux/arm64/v1.27.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-481885"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.07s)
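The preload downloads in these logs carry an inline digest (`?checksum=md5:...`), and the `getting/saving/verifying checksum` lines show the tarball being hashed after the transfer. Below is a minimal illustration of that verification step, not minikube's actual download code; the cache path and the v1.27.4 digest are copied from the log above.

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it with the expected hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and digest copied from this run's v1.27.4 download log; the CI job
	// keeps its cache under a dedicated MINIKUBE_HOME rather than $HOME.
	err := verifyMD5(
		os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-containerd-overlay2-arm64.tar.lz4"),
		"cf15c593d5924282ae979f284eb668e1",
	)
	fmt.Println("verify:", err)
}
```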

TestDownloadOnly/v1.28.0-rc.1/json-events (27.81s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-481885 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (27.811931809s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (27.81s)

TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-481885
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-481885: exit status 85 (72.010694ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-481885           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:11 UTC |          |
	|         | -p download-only-481885           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-481885 | jenkins | v1.31.2 | 17 Aug 23 21:11 UTC |          |
	|         | -p download-only-481885           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:11:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:11:43.621033    7901 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:11:43.621188    7901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:11:43.621197    7901 out.go:309] Setting ErrFile to fd 2...
	I0817 21:11:43.621203    7901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:11:43.621443    7901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	W0817 21:11:43.621558    7901 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-2431/.minikube/config/config.json: no such file or directory
	I0817 21:11:43.621770    7901 out.go:303] Setting JSON to true
	I0817 21:11:43.622475    7901 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3242,"bootTime":1692303461,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:11:43.622547    7901 start.go:138] virtualization:  
	I0817 21:11:43.625048    7901 out.go:97] [download-only-481885] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:11:43.627012    7901 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:11:43.625405    7901 notify.go:220] Checking for updates...
	I0817 21:11:43.630504    7901 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:11:43.632953    7901 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:11:43.635083    7901 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:11:43.637102    7901 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0817 21:11:43.640710    7901 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:11:43.641192    7901 config.go:182] Loaded profile config "download-only-481885": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	W0817 21:11:43.641259    7901 start.go:810] api.Load failed for download-only-481885: filestore "download-only-481885": Docker machine "download-only-481885" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:11:43.641350    7901 driver.go:373] Setting default libvirt URI to qemu:///system
	W0817 21:11:43.641377    7901 start.go:810] api.Load failed for download-only-481885: filestore "download-only-481885": Docker machine "download-only-481885" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:11:43.665186    7901 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:11:43.665265    7901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:11:43.756616    7901 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:11:43.746953946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:11:43.756749    7901 docker.go:294] overlay module found
	I0817 21:11:43.758676    7901 out.go:97] Using the docker driver based on existing profile
	I0817 21:11:43.758710    7901 start.go:298] selected driver: docker
	I0817 21:11:43.758719    7901 start.go:902] validating driver "docker" against &{Name:download-only-481885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-481885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:43.758905    7901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:11:43.834084    7901 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-08-17 21:11:43.824668478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:11:43.834518    7901 cni.go:84] Creating CNI manager for ""
	I0817 21:11:43.834531    7901 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0817 21:11:43.834540    7901 start_flags.go:319] config:
	{Name:download-only-481885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-481885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:43.836772    7901 out.go:97] Starting control plane node download-only-481885 in cluster download-only-481885
	I0817 21:11:43.836791    7901 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0817 21:11:43.838600    7901 out.go:97] Pulling base image ...
	I0817 21:11:43.838695    7901 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime containerd
	I0817 21:11:43.838838    7901 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:11:43.856343    7901 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:11:43.856452    7901 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:11:43.856468    7901 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:11:43.856473    7901 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:11:43.856480    7901 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:11:43.912967    7901 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I0817 21:11:43.913000    7901 cache.go:57] Caching tarball of preloaded images
	I0817 21:11:43.913145    7901 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime containerd
	I0817 21:11:43.915528    7901 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0817 21:11:43.915554    7901 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:44.035779    7901 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:32ba262a14fdb0229e69ee6a78dbb9b1 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I0817 21:11:54.571826    7901 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:54.571932    7901 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-containerd-overlay2-arm64.tar.lz4 ...
	I0817 21:11:55.420208    7901 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on containerd
	I0817 21:11:55.420351    7901 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/download-only-481885/config.json ...
	I0817 21:11:55.420576    7901 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime containerd
	I0817 21:11:55.420765    7901 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16865-2431/.minikube/cache/linux/arm64/v1.28.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-481885"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.07s)
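
The preload fetch above appends an md5 checksum to the download URL (?checksum=md5:32ba262a14fdb0229e69ee6a78dbb9b1) and verifies the saved tarball before trusting it (preload.go:238, :249, :256 above). A minimal Go sketch of that verify step, with a hypothetical local path; it illustrates the idea rather than reproducing minikube's implementation:

    // Sketch: md5-verify a downloaded preload tarball.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyChecksum hashes the file at path and compares it to the
    // expected hex digest (the value after "checksum=md5:" in the URL).
    func verifyChecksum(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Hypothetical path; the run above stores the tarball under
        // .minikube/cache/preloaded-tarball/.
        err := verifyChecksum("preloaded-images.tar.lz4",
            "32ba262a14fdb0229e69ee6a78dbb9b1")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }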

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-481885
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-579967 --alsologtostderr --binary-mirror http://127.0.0.1:39617 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-579967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-579967
--- PASS: TestBinaryMirror (0.60s)

TestAddons/Setup (131.47s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-028423 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-028423 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m11.470579891s)
--- PASS: TestAddons/Setup (131.47s)

TestAddons/parallel/Registry (126.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 47.984912ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jjtqn" [cd975e55-332b-4a73-a6cf-587df43db3a2] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023994287s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7lbds" [ca601be3-5325-4d4e-9c0c-84985c85a22f] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.027057105s
addons_test.go:316: (dbg) Run:  kubectl --context addons-028423 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-028423 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-028423 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (18.459916691s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 ip
2023/08/17 21:14:53 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:14:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:14:53 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/08/17 21:14:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:14:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/08/17 21:14:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:14:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (126.81s)
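
The interleaved [ERR]/[DEBUG] lines throughout these parallel tests come from a background HTTP probe of the registry endpoint that retries with a doubling delay: five attempts at 1s, 2s, 4s and 8s intervals. A minimal Go sketch of that retry pattern, using the endpoint from the log; the harness's real client may differ in its details:

    // Sketch: GET with doubling backoff, mirroring the
    // "retrying in 1s/2s/4s/8s (n left)" cadence in the log.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func getWithRetry(url string, attempts int) (*http.Response, error) {
        delay := time.Second
        var lastErr error
        for left := attempts - 1; left >= 0; left-- {
            resp, err := http.Get(url)
            if err == nil {
                return resp, nil
            }
            lastErr = err
            fmt.Printf("[ERR] GET %s request failed: %v\n", url, err)
            if left == 0 {
                break
            }
            fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, delay, left)
            time.Sleep(delay)
            delay *= 2 // 1s -> 2s -> 4s -> 8s
        }
        return nil, lastErr
    }

    func main() {
        resp, err := getWithRetry("http://192.168.49.2:5000", 5)
        if err != nil {
            fmt.Println("gave up:", err)
            return
        }
        resp.Body.Close()
    }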

TestAddons/parallel/InspektorGadget (10.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rs47h" [b6baad72-ada3-4472-84b9-836bcd420d67] Running
2023/08/17 21:15:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013810837s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-028423
2023/08/17 21:15:58 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:15:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/08/17 21:15:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/08/17 21:16:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:16:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-028423: (5.789926299s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 5.021626ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-5842c" [a073cb3c-d435-43d6-8c02-49700ae1503f] Running
2023/08/17 21:15:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016729564s
addons_test.go:391: (dbg) Run:  kubectl --context addons-028423 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/CSI (48.45s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 11.146493ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-028423 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-028423 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fabc0395-f484-471f-b020-20990acc43b2] Pending
helpers_test.go:344: "task-pv-pod" [fabc0395-f484-471f-b020-20990acc43b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
2023/08/17 21:15:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:344: "task-pv-pod" [fabc0395-f484-471f-b020-20990acc43b2] Running
2023/08/17 21:15:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:08 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:15:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.025224425s
addons_test.go:560: (dbg) Run:  kubectl --context addons-028423 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-028423 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-028423 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2023/08/17 21:15:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
helpers_test.go:419: (dbg) Run:  kubectl --context addons-028423 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-028423 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-028423 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-028423 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:24 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:15:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2023/08/17 21:15:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:27 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-028423 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-028423 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f0903e95-8896-46a3-84ef-fffced5e907f] Pending
helpers_test.go:344: "task-pv-pod-restore" [f0903e95-8896-46a3-84ef-fffced5e907f] Running
2023/08/17 21:15:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.031848149s
addons_test.go:602: (dbg) Run:  kubectl --context addons-028423 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-028423 delete pod task-pv-pod-restore: (1.03604334s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-028423 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-028423 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable csi-hostpath-driver --alsologtostderr -v=1
2023/08/17 21:15:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:41 [DEBUG] GET http://192.168.49.2:5000
2023/08/17 21:15:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/08/17 21:15:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/08/17 21:15:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/08/17 21:15:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-028423 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.804576244s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-028423 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.45s)
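
The repeated helpers_test.go:394 invocations above are a polling loop: the helper re-runs kubectl with a JSONPath query until the PVC's .status.phase reports Bound (or the 6m0s wait expires). A minimal Go sketch of that loop via os/exec, with the context and PVC names taken from the log; the poll interval is an assumption:

    // Sketch: poll a PVC's phase with kubectl until it is Bound.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-n", namespace,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second) // interval is an assumption
        }
        return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
    }

    func main() {
        if err := waitPVCBound("addons-028423", "hpvc-restore", "default", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }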

TestAddons/parallel/Headlamp (26.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-028423 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-028423 --alsologtostderr -v=1: (1.694911558s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-trhmd" [8a0aaccf-279e-4e01-b09a-c343a37419d8] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-trhmd" [8a0aaccf-279e-4e01-b09a-c343a37419d8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-trhmd" [8a0aaccf-279e-4e01-b09a-c343a37419d8] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 25.029452245s
--- PASS: TestAddons/parallel/Headlamp (26.73s)

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-4wg5q" [60801b92-9bb3-4f48-8801-c0e8c36d0c4c] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012548222s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-028423
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-028423 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-028423 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-028423
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-028423: (12.019165471s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-028423
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-028423
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-028423
--- PASS: TestAddons/StoppedEnableDisable (12.30s)

TestCertOptions (39.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-411899 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0817 21:54:24.657324    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-411899 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.723973459s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-411899 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-411899 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-411899 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-411899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-411899
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-411899: (2.349716264s)
--- PASS: TestCertOptions (39.80s)
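
The openssl call above dumps /var/lib/minikube/certs/apiserver.crt so the test can confirm that the extra --apiserver-ips, --apiserver-names and --apiserver-port values made it into the generated certificate. A minimal Go sketch of the same SAN check with crypto/x509, assuming the certificate has first been copied to a local file:

    // Sketch: print the SANs of an API server certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Hypothetical local path; the test reads the cert over "minikube ssh".
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Expect 192.168.15.15 among the IPs and www.google.com among the
        // DNS names, matching the flags passed to minikube start above.
        fmt.Println("IP SANs: ", cert.IPAddresses)
        fmt.Println("DNS SANs:", cert.DNSNames)
    }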

TestCertExpiration (232.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-608606 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-608606 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.718162573s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-608606 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-608606 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.325518788s)
helpers_test.go:175: Cleaning up "cert-expiration-608606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-608606
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-608606: (3.249407659s)
--- PASS: TestCertExpiration (232.29s)

TestForceSystemdFlag (41.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-067400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0817 21:52:27.705932    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-067400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.88577187s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-067400 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-067400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-067400
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-067400: (1.973540284s)
--- PASS: TestForceSystemdFlag (41.20s)

TestForceSystemdEnv (35.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-097278 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0817 21:53:40.497866    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-097278 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.451299871s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-097278 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-097278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-097278
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-097278: (2.049569768s)
--- PASS: TestForceSystemdEnv (35.80s)

TestDockerEnvContainerd (46.95s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-293090 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-293090 --driver=docker  --container-runtime=containerd: (30.76236864s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-293090"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-293090": (1.200426195s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NIUTwky88Ih8/agent.24137" SSH_AGENT_PID="24138" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NIUTwky88Ih8/agent.24137" SSH_AGENT_PID="24138" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NIUTwky88Ih8/agent.24137" SSH_AGENT_PID="24138" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.67147232s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-NIUTwky88Ih8/agent.24137" SSH_AGENT_PID="24138" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-293090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-293090
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-293090: (2.28805219s)
--- PASS: TestDockerEnvContainerd (46.95s)
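
The docker-env flow above works by exporting an SSH agent socket plus a DOCKER_HOST of the form ssh://docker@127.0.0.1:PORT, after which any docker CLI invocation is tunnelled to the daemon inside the node. A minimal Go sketch of running "docker version" with such an environment; the values shown are the ones printed in this run and will differ elsewhere:

    // Sketch: run the docker CLI against a minikube node over SSH,
    // as the exported docker-env variables above arrange.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "version")
        cmd.Env = append(os.Environ(),
            "DOCKER_HOST=ssh://docker@127.0.0.1:32777",
            "SSH_AUTH_SOCK=/tmp/ssh-NIUTwky88Ih8/agent.24137",
            "SSH_AGENT_PID=24138",
        )
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Fprintln(os.Stderr, "docker version failed:", err)
        }
    }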

TestErrorSpam/setup (31.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-112266 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-112266 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-112266 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-112266 --driver=docker  --container-runtime=containerd: (31.638086082s)
--- PASS: TestErrorSpam/setup (31.64s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 stop: (1.326460954s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-112266 --log_dir /tmp/nospam-112266 stop
--- PASS: TestErrorSpam/stop (1.54s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16865-2431/.minikube/files/etc/test/nested/copy/7745/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0817 21:19:24.659267    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.668350    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.678688    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.698942    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.739245    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.819545    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:24.979957    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:25.300481    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:25.940962    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:27.221195    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:29.781416    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:34.901876    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:19:45.142754    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-545557 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m1.573854175s)
--- PASS: TestFunctional/serial/StartWithProxy (61.58s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (21.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --alsologtostderr -v=8
E0817 21:20:05.622980    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-545557 --alsologtostderr -v=8: (21.018921341s)
functional_test.go:659: soft start took 21.023460597s for "functional-545557" cluster.
--- PASS: TestFunctional/serial/SoftStart (21.02s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-545557 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:3.1: (1.52882736s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:3.3: (1.39007044s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 cache add registry.k8s.io/pause:latest: (1.265984554s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-545557 /tmp/TestFunctionalserialCacheCmdcacheadd_local2912322241/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache add minikube-local-cache-test:functional-545557
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache delete minikube-local-cache-test:functional-545557
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-545557
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (322.326035ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 cache reload: (1.286418888s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)
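
The cache_reload sequence above removes pause:latest inside the node, shows that "crictl inspecti" then exits non-zero (the FATA "no such image" output), and restores the image with "minikube cache reload". A minimal Go sketch of the presence check, shelling out the same way the test does; the profile name is taken from the log:

    // Sketch: check whether an image exists in the node's containerd store.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // imagePresent returns true when crictl can inspect the image; crictl
    // exits non-zero with FATA "no such image" when it is absent.
    func imagePresent(profile, image string) bool {
        cmd := exec.Command("minikube", "-p", profile, "ssh",
            "sudo", "crictl", "inspecti", image)
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println(imagePresent("functional-545557", "registry.k8s.io/pause:latest"))
    }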

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 kubectl -- --context functional-545557 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-545557 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/LogsCmd (1.54s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 logs: (1.537424512s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

TestFunctional/serial/LogsFileCmd (1.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 logs --file /tmp/TestFunctionalserialLogsFileCmd2428191479/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 logs --file /tmp/TestFunctionalserialLogsFileCmd2428191479/001/logs.txt: (1.575093733s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

TestFunctional/serial/InvalidService (5.9s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-545557 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-545557
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-545557: exit status 115 (855.281284ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31297 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-545557 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-545557 delete -f testdata/invalidsvc.yaml: (1.240139866s)
--- PASS: TestFunctional/serial/InvalidService (5.90s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 config get cpus: exit status 14 (86.198117ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 config get cpus: exit status 14 (69.513191ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
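
The contract being verified above is simple: "config get" on an unset key fails with exit status 14, while set/get/unset round-trip cleanly. The same cycle by hand (profile name from this run):

# Unset key: "config get" fails, exit status 14 as in the log above.
out/minikube-linux-arm64 -p functional-545557 config unset cpus
out/minikube-linux-arm64 -p functional-545557 config get cpus || echo "get failed with exit $?"
# Set the key, read it back, then unset it again.
out/minikube-linux-arm64 -p functional-545557 config set cpus 2
out/minikube-linux-arm64 -p functional-545557 config get cpus
out/minikube-linux-arm64 -p functional-545557 config unset cpus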

TestFunctional/parallel/DashboardCmd (9.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-545557 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-545557 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 39279: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.51s)
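
The helper's "unable to kill pid 39279: os: process already finished" note is benign: the dashboard proxy had already exited by the time the test's cleanup ran. Managed by hand, the same lifecycle looks roughly like this (port taken from the run above):

# Start the dashboard proxy in the background and capture its PID.
out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-545557 &
DASH_PID=$!
sleep 5                                # give the proxy a moment to print its URL
kill "$DASH_PID" 2>/dev/null || true   # harmless if the process already exited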

TestFunctional/parallel/DryRun (0.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-545557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (281.2583ms)
-- stdout --
	* [functional-545557] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0817 21:21:47.068951   38617 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:21:47.069116   38617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:47.069135   38617 out.go:309] Setting ErrFile to fd 2...
	I0817 21:21:47.069152   38617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:47.069416   38617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:21:47.069867   38617 out.go:303] Setting JSON to false
	I0817 21:21:47.070911   38617 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3846,"bootTime":1692303461,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:21:47.071057   38617 start.go:138] virtualization:  
	I0817 21:21:47.076081   38617 out.go:177] * [functional-545557] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:21:47.078036   38617 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:21:47.080190   38617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:21:47.078215   38617 notify.go:220] Checking for updates...
	I0817 21:21:47.084733   38617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:21:47.086925   38617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:21:47.088805   38617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:21:47.091012   38617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:21:47.093602   38617 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:21:47.094070   38617 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:21:47.137613   38617 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:21:47.137766   38617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:21:47.281214   38617 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-17 21:21:47.27076517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:21:47.281364   38617 docker.go:294] overlay module found
	I0817 21:21:47.283529   38617 out.go:177] * Using the docker driver based on existing profile
	I0817 21:21:47.285409   38617 start.go:298] selected driver: docker
	I0817 21:21:47.285426   38617 start.go:902] validating driver "docker" against &{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:21:47.285537   38617 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:21:47.288724   38617 out.go:177] 
	W0817 21:21:47.290357   38617 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0817 21:21:47.292225   38617 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.65s)
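
What the two runs above establish is that --dry-run still performs driver and resource validation without touching the cluster: a 250MB request trips the 1800MB usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a dry run without the undersized request passes. A minimal reproduction:

# Fails validation with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY); nothing is created.
out/minikube-linux-arm64 start -p functional-545557 --dry-run --memory 250MB \
  --driver=docker --container-runtime=containerd
echo "exit: $?"
# Without the undersized memory request the same dry run validates cleanly.
out/minikube-linux-arm64 start -p functional-545557 --dry-run \
  --driver=docker --container-runtime=containerd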

TestFunctional/parallel/InternationalLanguage (0.32s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-545557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-545557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (314.500042ms)
-- stdout --
	* [functional-545557] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0817 21:21:47.731723   38786 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:21:47.731879   38786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:47.731889   38786 out.go:309] Setting ErrFile to fd 2...
	I0817 21:21:47.731895   38786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:21:47.732250   38786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:21:47.735045   38786 out.go:303] Setting JSON to false
	I0817 21:21:47.736079   38786 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3847,"bootTime":1692303461,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:21:47.736173   38786 start.go:138] virtualization:  
	I0817 21:21:47.738833   38786 out.go:177] * [functional-545557] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0817 21:21:47.741279   38786 notify.go:220] Checking for updates...
	I0817 21:21:47.744303   38786 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:21:47.746305   38786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:21:47.748663   38786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:21:47.750449   38786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:21:47.752371   38786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:21:47.754157   38786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:21:47.756606   38786 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:21:47.757077   38786 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:21:47.832511   38786 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:21:47.832599   38786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:21:47.961400   38786 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-17 21:21:47.951075119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:21:47.961507   38786 docker.go:294] overlay module found
	I0817 21:21:47.964012   38786 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0817 21:21:47.965712   38786 start.go:298] selected driver: docker
	I0817 21:21:47.965729   38786 start.go:902] validating driver "docker" against &{Name:functional-545557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-545557 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:21:47.965835   38786 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:21:47.968524   38786 out.go:177] 
	W0817 21:21:47.970350   38786 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0817 21:21:47.971854   38786 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
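
The French output is the point of this test: minikube picks its message catalog from the locale environment, so "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of the same "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" error seen in DryRun. Assuming the harness drives this via the standard locale variables, the behaviour can be reproduced with something like:

# LC_ALL here is an assumption about how the locale is selected; a French
# catalog must be available for the translated messages to appear.
LC_ALL=fr out/minikube-linux-arm64 start -p functional-545557 --dry-run \
  --memory 250MB --driver=docker --container-runtime=containerd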

TestFunctional/parallel/StatusCmd (1.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
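
The three runs cover the three output modes of "status": the default table, a Go template via -f (note the test's format string spells its label "kublet"; the label text is arbitrary, only the {{.Kubelet}} field name matters), and machine-readable JSON via -o:

# Default human-readable status.
out/minikube-linux-arm64 -p functional-545557 status
# Custom Go template over the status fields.
out/minikube-linux-arm64 -p functional-545557 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
# JSON for scripting.
out/minikube-linux-arm64 -p functional-545557 status -o json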

TestFunctional/parallel/ServiceCmdConnect (7.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-545557 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-545557 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-skqsk" [48c95ac3-61e3-4b12-ab80-de20d1b2c35f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-skqsk" [48c95ac3-61e3-4b12-ab80-de20d1b2c35f] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.024195319s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32560
functional_test.go:1674: http://192.168.49.2:32560: success! body:
Hostname: hello-node-connect-58d66798bb-skqsk
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32560
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.77s)
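
The flow above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the reachable URL, and fetch it. Condensed (image and names from the log; the NodePort, 32560 here, varies per run):

kubectl --context functional-545557 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-545557 expose deployment hello-node-connect \
  --type=NodePort --port=8080
kubectl --context functional-545557 wait --for=condition=available \
  deployment/hello-node-connect --timeout=2m
# Resolve the NodePort URL and hit the echoserver.
URL=$(out/minikube-linux-arm64 -p functional-545557 service hello-node-connect --url)
curl -s "$URL"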

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c2ba22e9-5bd3-4857-869e-b25ea0e1a08d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011391031s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-545557 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-545557 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-545557 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-545557 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aa5f1f83-f320-4572-b988-8d9186ccb165] Pending
helpers_test.go:344: "sp-pod" [aa5f1f83-f320-4572-b988-8d9186ccb165] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aa5f1f83-f320-4572-b988-8d9186ccb165] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.015486462s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-545557 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-545557 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-545557 delete -f testdata/storage-provisioner/pod.yaml: (1.093185634s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-545557 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [021a90d5-0768-45ff-bf48-dce9ccff0474] Pending
helpers_test.go:344: "sp-pod" [021a90d5-0768-45ff-bf48-dce9ccff0474] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [021a90d5-0768-45ff-bf48-dce9ccff0474] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.016103435s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-545557 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.48s)
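
The persistence check above is: claim a volume, write a file through one pod, delete the pod, and read the file back from a fresh pod bound to the same claim. A sketch, assuming testdata/storage-provisioner/pvc.yaml is the usual minimal claim (the real fixture may differ):

# Assumed shape of the test's PVC manifest.
kubectl --context functional-545557 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# Write through the first pod, recreate it, then confirm the file survived.
kubectl --context functional-545557 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-545557 delete pod sp-pod
kubectl --context functional-545557 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-545557 wait --for=condition=ready pod/sp-pod --timeout=3m
kubectl --context functional-545557 exec sp-pod -- ls /tmp/mount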

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (1.54s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh -n functional-545557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 cp functional-545557:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1459808010/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh -n functional-545557 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)
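
"minikube cp" copies in both directions; the in-VM side is addressed as <node>:<path>, as the second run above shows:

# Host -> node: a bare destination path defaults to the primary node.
out/minikube-linux-arm64 -p functional-545557 cp testdata/cp-test.txt /home/docker/cp-test.txt
# Node -> host: prefix the in-node path with the node name.
out/minikube-linux-arm64 -p functional-545557 cp functional-545557:/home/docker/cp-test.txt /tmp/cp-test.txt
# Verify inside the node.
out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /home/docker/cp-test.txt"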

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7745/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/test/nested/copy/7745/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)
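
File sync refers to minikube copying anything under $MINIKUBE_HOME/files into the node at the same relative path when the cluster starts; the /etc/test/nested/copy/7745/hosts file above was seeded that way by the harness (7745 appears to be the test process's PID). Roughly:

# Stage a file under the sync root; it lands at the same path inside the node on the next start.
SYNC_ROOT="${MINIKUBE_HOME:-$HOME/.minikube}/files"
mkdir -p "$SYNC_ROOT/etc/test/nested/copy/7745"
echo "Test file for checking file sync process" > "$SYNC_ROOT/etc/test/nested/copy/7745/hosts"
out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/test/nested/copy/7745/hosts"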

TestFunctional/parallel/CertSync (2.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7745.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/ssl/certs/7745.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7745.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /usr/share/ca-certificates/7745.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77452.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/ssl/certs/77452.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77452.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /usr/share/ca-certificates/77452.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
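
Cert sync is the same mechanism applied to CA certificates: certs dropped under $MINIKUBE_HOME/certs are installed into the node under /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash symlink, which is what the 51391683.0 and 3ec20f2e.0 names above are. A sketch (my-ca.pem is a placeholder, not a file from this run):

# Stage a CA cert; minikube installs it into the node on the next start.
cp my-ca.pem "${MINIKUBE_HOME:-$HOME/.minikube}/certs/"
# The <hash>.0 filename seen in the log is the cert's OpenSSL subject hash.
openssl x509 -noout -hash -in my-ca.pem
out/minikube-linux-arm64 -p functional-545557 ssh "sudo ls -l /etc/ssl/certs"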

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-545557 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
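
The template in the run above iterates the first node's label map and prints the keys; it is a handy standalone query:

# List the label keys of the first node (same template as the test).
kubectl --context functional-545557 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'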

TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active docker": exit status 1 (389.442939ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active crio": exit status 1 (395.391387ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
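
The exit-status chain is the substance here: "systemctl is-active" prints "inactive" and exits 3 for a stopped unit, ssh propagates that status (the "Process exited with status 3" stderr above), and minikube surfaces a non-zero exit, which is what the test asserts for both non-selected runtimes. By hand:

# containerd is the active runtime in this profile; docker and crio must be inactive.
out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active containerd"   # active, exit 0
out/minikube-linux-arm64 -p functional-545557 ssh "sudo systemctl is-active docker" \
  || echo "docker runtime is off (exit $?)"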

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 version -o=json --components: (1.41600942s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-545557 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-545557
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-545557 image ls --format short --alsologtostderr:
I0817 21:21:55.873399   40146 out.go:296] Setting OutFile to fd 1 ...
I0817 21:21:55.873516   40146 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:55.873521   40146 out.go:309] Setting ErrFile to fd 2...
I0817 21:21:55.873525   40146 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:55.873775   40146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
I0817 21:21:55.874345   40146 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:55.874454   40146 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:55.874971   40146 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
I0817 21:21:55.894140   40146 ssh_runner.go:195] Run: systemctl --version
I0817 21:21:55.894189   40146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
I0817 21:21:55.913098   40146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
I0817 21:21:56.015372   40146 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
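
The four ImageList variants here and below drive the same listing (backed by "sudo crictl images --output json" on the node, per the stderr) and differ only in presentation:

# Same data, four formats.
out/minikube-linux-arm64 -p functional-545557 image ls --format short   # one image ref per line
out/minikube-linux-arm64 -p functional-545557 image ls --format table   # bordered table, as below
out/minikube-linux-arm64 -p functional-545557 image ls --format json
out/minikube-linux-arm64 -p functional-545557 image ls --format yaml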

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-545557 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:ab73c7 | 67.2MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.27.4            | sha256:6eb638 | 16.6MB |
| registry.k8s.io/kube-apiserver              | v1.27.4            | sha256:64aece | 30.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-545557  | sha256:79bdd8 | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:397432 | 17.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:24bc64 | 80.7MB |
| registry.k8s.io/kube-controller-manager     | v1.27.4            | sha256:389f6f | 28.2MB |
| registry.k8s.io/kube-proxy                  | v1.27.4            | sha256:532e5a | 21.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-545557 image ls --format table --alsologtostderr:
I0817 21:21:57.782876   40353 out.go:296] Setting OutFile to fd 1 ...
I0817 21:21:57.783058   40353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:57.783067   40353 out.go:309] Setting ErrFile to fd 2...
I0817 21:21:57.783072   40353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:57.783441   40353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
I0817 21:21:57.784221   40353 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:57.784364   40353 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:57.785032   40353 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
I0817 21:21:57.807467   40353 ssh_runner.go:195] Run: systemctl --version
I0817 21:21:57.807537   40353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
I0817 21:21:57.829874   40353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
I0817 21:21:57.920951   40353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-545557 image ls --format json --alsologtostderr:
[{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"21370483"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":["docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a
8b33c223cff19627dd3042e5a10a3a0"],"repoTags":["docker.io/library/nginx:latest"],"size":"67190345"},{"id":"sha256:64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"30391720"},{"id":"sha256:6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","repoDigests":["registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"16552982"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.
k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"80665728"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:79bdd8d004334156f7debb7ca07195b9e0db3ec6303f5b5e63c19067312c6df1","repoDigests":[],"repoTags":["docker.io
/library/minikube-local-cache-test:functional-545557"],"size":"1007"},{"id":"sha256:397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091","repoDigests":["docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17568094"},{"id":"sha256:389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"28222966"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":
"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-545557 image ls --format json --alsologtostderr:
I0817 21:21:57.534095   40328 out.go:296] Setting OutFile to fd 1 ...
I0817 21:21:57.534288   40328 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:57.534308   40328 out.go:309] Setting ErrFile to fd 2...
I0817 21:21:57.534324   40328 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:57.534716   40328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
I0817 21:21:57.535688   40328 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:57.535878   40328 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:57.536595   40328 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
I0817 21:21:57.557458   40328 ssh_runner.go:195] Run: systemctl --version
I0817 21:21:57.557512   40328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
I0817 21:21:57.581058   40328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
I0817 21:21:57.672242   40328 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-545557 image ls --format yaml --alsologtostderr:
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:79bdd8d004334156f7debb7ca07195b9e0db3ec6303f5b5e63c19067312c6df1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-545557
size: "1007"
- id: sha256:6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "16552982"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091
repoDigests:
- docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385
repoTags:
- docker.io/library/nginx:alpine
size: "17568094"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "28222966"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "21370483"
- id: sha256:64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "30391720"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests:
- docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a8b33c223cff19627dd3042e5a10a3a0
repoTags:
- docker.io/library/nginx:latest
size: "67190345"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "80665728"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-545557 image ls --format yaml --alsologtostderr:
I0817 21:21:56.111560   40182 out.go:296] Setting OutFile to fd 1 ...
I0817 21:21:56.111783   40182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:56.111813   40182 out.go:309] Setting ErrFile to fd 2...
I0817 21:21:56.111833   40182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:56.112144   40182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
I0817 21:21:56.112861   40182 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:56.113014   40182 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:56.113536   40182 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
I0817 21:21:56.131937   40182 ssh_runner.go:195] Run: systemctl --version
I0817 21:21:56.131984   40182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
I0817 21:21:56.152206   40182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
I0817 21:21:56.245912   40182 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
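
The stderr above shows what backs "image ls" on a containerd runtime: minikube shells into the node and runs crictl. The same listing can be pulled directly with the command the test itself ran, as a quick sketch:

    # list node images via crictl, exactly as the test's ssh_runner does
    out/minikube-linux-arm64 -p functional-545557 ssh "sudo crictl images --output json"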

TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh pgrep buildkitd: exit status 1 (296.050559ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image build -t localhost/my-image:functional-545557 testdata/build --alsologtostderr
2023/08/17 21:21:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-545557 image build -t localhost/my-image:functional-545557 testdata/build --alsologtostderr: (2.441225127s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-545557 image build -t localhost/my-image:functional-545557 testdata/build --alsologtostderr:
I0817 21:21:56.645826   40261 out.go:296] Setting OutFile to fd 1 ...
I0817 21:21:56.646040   40261 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:56.646071   40261 out.go:309] Setting ErrFile to fd 2...
I0817 21:21:56.646091   40261 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:21:56.646365   40261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
I0817 21:21:56.647120   40261 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:56.647755   40261 config.go:182] Loaded profile config "functional-545557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
I0817 21:21:56.648339   40261 cli_runner.go:164] Run: docker container inspect functional-545557 --format={{.State.Status}}
I0817 21:21:56.666556   40261 ssh_runner.go:195] Run: systemctl --version
I0817 21:21:56.666609   40261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-545557
I0817 21:21:56.689947   40261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/functional-545557/id_rsa Username:docker}
I0817 21:21:56.784337   40261 build_images.go:151] Building image from path: /tmp/build.1788441882.tar
I0817 21:21:56.784411   40261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0817 21:21:56.795582   40261 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1788441882.tar
I0817 21:21:56.800222   40261 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1788441882.tar: stat -c "%s %y" /var/lib/minikube/build/build.1788441882.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1788441882.tar': No such file or directory
I0817 21:21:56.800252   40261 ssh_runner.go:362] scp /tmp/build.1788441882.tar --> /var/lib/minikube/build/build.1788441882.tar (3072 bytes)
I0817 21:21:56.830347   40261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1788441882
I0817 21:21:56.842453   40261 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1788441882 -xf /var/lib/minikube/build/build.1788441882.tar
I0817 21:21:56.853425   40261 containerd.go:378] Building image: /var/lib/minikube/build/build.1788441882
I0817 21:21:56.853492   40261 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1788441882 --local dockerfile=/var/lib/minikube/build/build.1788441882 --output type=image,name=localhost/my-image:functional-545557
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#4 DONE 0.0s

#5 [internal] load build context
#5 transferring context: 62B 0.0s done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#4 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#4 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c3398fa2cd2be69e822d46ed5180a80cb7f0a40006f50bf8f18c5531c438e073 0.0s done
#8 exporting config sha256:330bb59881e7378cefcc3e35ec6c95a2f873b2d80285b7fb8b291029e438e9c3 0.0s done
#8 naming to localhost/my-image:functional-545557 done
#8 DONE 0.1s
I0817 21:21:58.998651   40261 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1788441882 --local dockerfile=/var/lib/minikube/build/build.1788441882 --output type=image,name=localhost/my-image:functional-545557: (2.145096596s)
I0817 21:21:58.998715   40261 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1788441882
I0817 21:21:59.010032   40261 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1788441882.tar
I0817 21:21:59.021659   40261 build_images.go:207] Built localhost/my-image:functional-545557 from /tmp/build.1788441882.tar
I0817 21:21:59.021686   40261 build_images.go:123] succeeded building to: functional-545557
I0817 21:21:59.021691   40261 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)
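
The buildkit steps above (#3/#4 FROM, #6 RUN true, #7 ADD content.txt) imply that testdata/build holds a three-instruction Dockerfile roughly like the sketch below; the file itself is not printed in the log, so this is a reconstruction, not the exact contents. The build invocation is the one the test ran, pointed at the current directory instead of testdata/build:

    # reconstructed from build steps #3-#7; the real testdata/build may differ
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    out/minikube-linux-arm64 -p functional-545557 image build -t localhost/my-image:functional-545557 .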

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.760762879s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-545557
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-545557 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-545557 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-lmjrv" [1cdd3797-c1a6-4c3b-9d78-d5b43ca499e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-lmjrv" [1cdd3797-c1a6-4c3b-9d78-d5b43ca499e1] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.033537736s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.29s)
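
The deployment under test can be reproduced by hand with the same two kubectl commands the test runs; the trailing watch is an illustrative addition for observing the Pending -> Running transition recorded above:

    kubectl --context functional-545557 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-545557 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-545557 get pods -l app=hello-node --watch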

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service list -o json
functional_test.go:1493: Took "425.443639ms" to run "out/minikube-linux-arm64 -p functional-545557 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32565
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32565
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 36619: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image rm gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-545557 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cd5db3ee-b6a4-492f-9b56-06022b06c65f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cd5db3ee-b6a4-492f-9b56-06022b06c65f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.016238915s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-545557
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 image save --daemon gcr.io/google-containers/addon-resizer:functional-545557 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-545557
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-545557 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.110.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
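
Taken together, the tunnel subtests follow the standard workflow: start the tunnel, wait for the LoadBalancer service to be assigned an ingress IP, then hit that IP directly (10.111.110.1 in this run). A minimal sketch built from the commands in this log; the curl probe is an illustrative addition:

    out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr &
    IP=$(kubectl --context functional-545557 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"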

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-545557 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "352.428793ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "62.701801ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "342.955964ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "60.261366ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdany-port1828161169/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692307300752008174" to /tmp/TestFunctionalparallelMountCmdany-port1828161169/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692307300752008174" to /tmp/TestFunctionalparallelMountCmdany-port1828161169/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692307300752008174" to /tmp/TestFunctionalparallelMountCmdany-port1828161169/001/test-1692307300752008174
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.7046ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 17 21:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 17 21:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 17 21:21 test-1692307300752008174
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh cat /mount-9p/test-1692307300752008174
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-545557 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dbbadc43-254e-4ade-82d2-3aeefafc1658] Pending
helpers_test.go:344: "busybox-mount" [dbbadc43-254e-4ade-82d2-3aeefafc1658] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dbbadc43-254e-4ade-82d2-3aeefafc1658] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dbbadc43-254e-4ade-82d2-3aeefafc1658] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.017871535s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-545557 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdany-port1828161169/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
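
The 9p mount exercised here can be checked by hand with the same commands: start the mount in the background, then verify and browse it from inside the guest (the host path below is illustrative; the test used a temp directory):

    out/minikube-linux-arm64 mount -p functional-545557 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-545557 ssh -- ls -la /mount-9p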

TestFunctional/parallel/MountCmd/specific-port (2.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdspecific-port807455357/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (611.874007ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdspecific-port807455357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "sudo umount -f /mount-9p": exit status 1 (320.002237ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-545557 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdspecific-port807455357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.48s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T" /mount1: exit status 1 (937.676003ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-545557 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-545557 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-545557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3790486554/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-545557
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-545557
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-545557
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (87.75s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-679314 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0817 21:22:08.503990    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-679314 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m27.749698812s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons enable ingress --alsologtostderr -v=5: (10.023881373s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.02s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.7s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-679314 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.70s)

TestJSONOutput/start/Command (95.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-981095 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0817 21:24:52.344179    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:26:07.547979    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.553232    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.563445    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.583662    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.623828    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.704082    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:07.864410    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:08.184984    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:08.825854    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-981095 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m35.816609883s)
--- PASS: TestJSONOutput/start/Command (95.82s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-981095 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-981095 --output=json --user=testUser
E0817 21:26:10.106672    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-981095 --output=json --user=testUser
E0817 21:26:12.666925    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-981095 --output=json --user=testUser: (5.87534338s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-189068 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-189068 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.911633ms)

-- stdout --
	{"specversion":"1.0","id":"cbd9f076-4325-4c88-9624-61f14f8a14b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-189068] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ce449d8-1d97-42be-b081-79fe5c3f1f74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16865"}}
	{"specversion":"1.0","id":"d367eb8a-6d1a-45b1-a17d-8e153609e8f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bde20dbc-5953-4c96-9408-49069435c3b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig"}}
	{"specversion":"1.0","id":"0a1210f6-876e-48c5-b19d-bb3a7f3e29a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube"}}
	{"specversion":"1.0","id":"892de694-11d0-4a43-99f3-c26dfddf5ad8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d36f326a-5251-4fde-b4c8-0110192a11d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"279ed7a1-53f0-4145-90bd-75fec2479f58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-189068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-189068
--- PASS: TestErrorJSONOutput (0.23s)
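
Each stdout line above is a CloudEvents envelope whose payload sits under .data. Error events can be filtered out of the stream with jq, as in this sketch (jq is an assumed extra tool, not part of the test):

    out/minikube-linux-arm64 start -p json-output-error-189068 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected, per the run above: The driver 'fail' is not supported on linux/arm64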

TestKicCustomNetwork/create_custom_network (41.57s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-543886 --network=
E0817 21:26:28.028121    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:26:48.508342    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-543886 --network=: (39.410880256s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-543886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-543886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-543886: (2.134853283s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.57s)

TestKicCustomNetwork/use_default_bridge_network (33.76s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-966814 --network=bridge
E0817 21:27:29.469255    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-966814 --network=bridge: (31.794765041s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-966814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-966814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-966814: (1.947160215s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.76s)

TestKicExistingNetwork (32.09s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-444479 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-444479 --network=existing-network: (29.988108371s)
helpers_test.go:175: Cleaning up "existing-network-444479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-444479
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-444479: (1.93724791s)
--- PASS: TestKicExistingNetwork (32.09s)
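
Note: here the Docker network exists before minikube starts. The log above only shows the `docker network ls` check; the pre-creation step in this sketch is an assumption about how such a network would be set up by hand:

	docker network create existing-network        # pre-existing network (assumed step)
	minikube start -p netdemo --network=existing-network
	minikube delete -p netdemo                    # a network minikube did not create should survive deletion
	docker network rm existing-network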

TestKicCustomSubnet (37.94s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-781833 --subnet=192.168.60.0/24
E0817 21:28:40.497961    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.503219    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.513468    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.533701    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.573933    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.654229    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:40.814588    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:41.134965    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:41.775810    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:43.056694    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-781833 --subnet=192.168.60.0/24: (35.814053223s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-781833 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-781833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-781833
E0817 21:28:45.617342    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-781833: (2.108229669s)
--- PASS: TestKicCustomSubnet (37.94s)
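
Note: --subnet pins the CIDR of the Docker network minikube creates for the profile. A sketch of the same check (placeholder profile name; the network is named after the profile):

	minikube start -p subnetdemo --subnet=192.168.60.0/24
	# verify the CIDR minikube configured on the network
	docker network inspect subnetdemo --format "{{(index .IPAM.Config 0).Subnet}}"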

TestKicStaticIP (34.56s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-473175 --static-ip=192.168.200.200
E0817 21:28:50.737554    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:28:51.390422    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 21:29:00.977711    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-473175 --static-ip=192.168.200.200: (32.288268526s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-473175 ip
helpers_test.go:175: Cleaning up "static-ip-473175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-473175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-473175: (2.116969772s)
--- PASS: TestKicStaticIP (34.56s)
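
Note: --static-ip fixes the node container's address instead of letting Docker assign one. Sketch (placeholder profile name):

	minikube start -p ipdemo --static-ip=192.168.200.200
	minikube -p ipdemo ip    # should print 192.168.200.200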

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (78.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-877543 --driver=docker  --container-runtime=containerd
E0817 21:29:21.458824    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:29:24.658280    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-877543 --driver=docker  --container-runtime=containerd: (40.685842296s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-880160 --driver=docker  --container-runtime=containerd
E0817 21:30:02.419249    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-880160 --driver=docker  --container-runtime=containerd: (32.191172647s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-877543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-880160
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-880160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-880160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-880160: (2.017673938s)
helpers_test.go:175: Cleaning up "first-877543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-877543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-877543: (2.200659005s)
--- PASS: TestMinikubeProfile (78.28s)
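
Note: the profile commands above are how minikube switches between and enumerates coexisting clusters. Sketch:

	minikube profile first-877543     # select a profile as the active one
	minikube profile list -ojson      # machine-readable listing of all profiles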

TestMountStart/serial/StartWithMountFirst (8.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-206203 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-206203 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.942307356s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.94s)
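
Note: the start line above carries the full set of 9p mount flags this test drives. A sketch with the same values (profile name is a placeholder; the host directory is minikube's default, surfaced in the guest as /minikube-host):

	minikube start -p mountdemo --memory=2048 --no-kubernetes \
	    --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
	minikube -p mountdemo ssh -- ls /minikube-host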

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-206203 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-208090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-208090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.593074639s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.59s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-208090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-206203 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-206203 --alsologtostderr -v=5: (1.653534276s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-208090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-208090
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-208090: (1.216848429s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-208090
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-208090: (6.956524098s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-208090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (105.54s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-368596 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0817 21:31:24.340072    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:31:35.231398    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-368596 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m45.000424829s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.54s)
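
Note: a multi-node cluster comes up from a single invocation; --nodes controls the count and --wait=true blocks until components are healthy. Sketch:

	minikube start -p multidemo --nodes=2 --memory=2200 --wait=true
	minikube -p multidemo status    # one control plane plus one worker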

TestMultiNode/serial/DeployApp2Nodes (4.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-368596 -- rollout status deployment/busybox: (2.682152303s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-54r62 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-vmbsb -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-54r62 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-vmbsb -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-54r62 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-vmbsb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)

TestMultiNode/serial/PingHostFrom2Pods (1.18s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-54r62 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-54r62 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-vmbsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-368596 -- exec busybox-67b7f59bb-vmbsb -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.18s)
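
Note: the exec lines above resolve host.minikube.internal inside a pod and then ping the resulting host gateway address (192.168.58.1 in this run). The awk/cut pipeline simply extracts the IP from nslookup's output; `<pod>` below is a placeholder for one of the busybox pods:

	kubectl exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"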

TestMultiNode/serial/AddNode (19.12s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-368596 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-368596 -v 3 --alsologtostderr: (18.422712522s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.12s)
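
Note: nodes can be appended to a running cluster; the new node appears as <profile>-m03 in status. Sketch:

	minikube node add -p multidemo
	minikube -p multidemo status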

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.65s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp testdata/cp-test.txt multinode-368596:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041074370/001/cp-test_multinode-368596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596:/home/docker/cp-test.txt multinode-368596-m02:/home/docker/cp-test_multinode-368596_multinode-368596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test_multinode-368596_multinode-368596-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596:/home/docker/cp-test.txt multinode-368596-m03:/home/docker/cp-test_multinode-368596_multinode-368596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test_multinode-368596_multinode-368596-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp testdata/cp-test.txt multinode-368596-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041074370/001/cp-test_multinode-368596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m02:/home/docker/cp-test.txt multinode-368596:/home/docker/cp-test_multinode-368596-m02_multinode-368596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test_multinode-368596-m02_multinode-368596.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m02:/home/docker/cp-test.txt multinode-368596-m03:/home/docker/cp-test_multinode-368596-m02_multinode-368596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test_multinode-368596-m02_multinode-368596-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp testdata/cp-test.txt multinode-368596-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3041074370/001/cp-test_multinode-368596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m03:/home/docker/cp-test.txt multinode-368596:/home/docker/cp-test_multinode-368596-m03_multinode-368596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596 "sudo cat /home/docker/cp-test_multinode-368596-m03_multinode-368596.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 cp multinode-368596-m03:/home/docker/cp-test.txt multinode-368596-m02:/home/docker/cp-test_multinode-368596-m03_multinode-368596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 ssh -n multinode-368596-m02 "sudo cat /home/docker/cp-test_multinode-368596-m03_multinode-368596-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.65s)
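
Note: `minikube cp` addresses nodes as <profile>, <profile>-m02, <profile>-m03, and the test cycles host-to-node, node-to-host, and node-to-node copies. One example of each direction (placeholder paths):

	minikube -p multidemo cp testdata/cp-test.txt multidemo:/home/docker/cp-test.txt
	minikube -p multidemo cp multidemo:/home/docker/cp-test.txt /tmp/cp-test.txt
	minikube -p multidemo cp multidemo:/home/docker/cp-test.txt multidemo-m02:/home/docker/cp-test.txt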

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-368596 node stop m03: (1.227369082s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-368596 status: exit status 7 (544.238659ms)

                                                
                                                
-- stdout --
	multinode-368596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-368596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-368596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr: exit status 7 (545.114515ms)

                                                
                                                
-- stdout --
	multinode-368596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-368596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-368596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:33:32.201054   87577 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:33:32.201235   87577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:33:32.201262   87577 out.go:309] Setting ErrFile to fd 2...
	I0817 21:33:32.201281   87577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:33:32.201542   87577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:33:32.201754   87577 out.go:303] Setting JSON to false
	I0817 21:33:32.201899   87577 mustload.go:65] Loading cluster: multinode-368596
	I0817 21:33:32.202009   87577 notify.go:220] Checking for updates...
	I0817 21:33:32.202352   87577 config.go:182] Loaded profile config "multinode-368596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:33:32.202386   87577 status.go:255] checking status of multinode-368596 ...
	I0817 21:33:32.203898   87577 cli_runner.go:164] Run: docker container inspect multinode-368596 --format={{.State.Status}}
	I0817 21:33:32.223588   87577 status.go:330] multinode-368596 host status = "Running" (err=<nil>)
	I0817 21:33:32.223661   87577 host.go:66] Checking if "multinode-368596" exists ...
	I0817 21:33:32.224166   87577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-368596
	I0817 21:33:32.243003   87577 host.go:66] Checking if "multinode-368596" exists ...
	I0817 21:33:32.243332   87577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:33:32.243387   87577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-368596
	I0817 21:33:32.278777   87577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/multinode-368596/id_rsa Username:docker}
	I0817 21:33:32.377066   87577 ssh_runner.go:195] Run: systemctl --version
	I0817 21:33:32.382568   87577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:33:32.396406   87577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:33:32.464334   87577 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-17 21:33:32.453807265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:33:32.465056   87577 kubeconfig.go:92] found "multinode-368596" server: "https://192.168.58.2:8443"
	I0817 21:33:32.465078   87577 api_server.go:166] Checking apiserver status ...
	I0817 21:33:32.465126   87577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:33:32.477779   87577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I0817 21:33:32.489083   87577 api_server.go:182] apiserver freezer: "7:freezer:/docker/4dba0df2cbc711efb5e345f9d60baee2a413441c76982d1a162637c221f61928/kubepods/burstable/pod7d797c53c5ccbf99579a4b0b00ca535a/8f6a647e86f64691ea1808b5ddf15d6e2a674708437785d5e17c403b37bd6000"
	I0817 21:33:32.489153   87577 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4dba0df2cbc711efb5e345f9d60baee2a413441c76982d1a162637c221f61928/kubepods/burstable/pod7d797c53c5ccbf99579a4b0b00ca535a/8f6a647e86f64691ea1808b5ddf15d6e2a674708437785d5e17c403b37bd6000/freezer.state
	I0817 21:33:32.499336   87577 api_server.go:204] freezer state: "THAWED"
	I0817 21:33:32.499367   87577 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 21:33:32.508209   87577 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 21:33:32.508236   87577 status.go:421] multinode-368596 apiserver status = Running (err=<nil>)
	I0817 21:33:32.508248   87577 status.go:257] multinode-368596 status: &{Name:multinode-368596 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:33:32.508271   87577 status.go:255] checking status of multinode-368596-m02 ...
	I0817 21:33:32.508571   87577 cli_runner.go:164] Run: docker container inspect multinode-368596-m02 --format={{.State.Status}}
	I0817 21:33:32.525984   87577 status.go:330] multinode-368596-m02 host status = "Running" (err=<nil>)
	I0817 21:33:32.526004   87577 host.go:66] Checking if "multinode-368596-m02" exists ...
	I0817 21:33:32.526287   87577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-368596-m02
	I0817 21:33:32.545368   87577 host.go:66] Checking if "multinode-368596-m02" exists ...
	I0817 21:33:32.545661   87577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:33:32.545704   87577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-368596-m02
	I0817 21:33:32.566009   87577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16865-2431/.minikube/machines/multinode-368596-m02/id_rsa Username:docker}
	I0817 21:33:32.657132   87577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:33:32.670724   87577 status.go:257] multinode-368596-m02 status: &{Name:multinode-368596-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:33:32.670758   87577 status.go:255] checking status of multinode-368596-m03 ...
	I0817 21:33:32.671055   87577 cli_runner.go:164] Run: docker container inspect multinode-368596-m03 --format={{.State.Status}}
	I0817 21:33:32.689048   87577 status.go:330] multinode-368596-m03 host status = "Stopped" (err=<nil>)
	I0817 21:33:32.689071   87577 status.go:343] host is not running, skipping remaining checks
	I0817 21:33:32.689079   87577 status.go:257] multinode-368596-m03 status: &{Name:multinode-368596-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
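
Note: with one node stopped, `status` still prints per-node state but exits with status 7, which is exactly what the Non-zero exit assertions above rely on. Sketch:

	minikube -p multidemo node stop m03
	minikube -p multidemo status || echo "exit $? (at least one node is down)"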

TestMultiNode/serial/StartAfterStop (12.72s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 node start m03 --alsologtostderr
E0817 21:33:40.497449    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-368596 node start m03 --alsologtostderr: (11.86602472s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.72s)

TestMultiNode/serial/RestartKeepsNodes (147.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-368596
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-368596
E0817 21:34:08.180640    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-368596: (25.144813264s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-368596 --wait=true -v=8 --alsologtostderr
E0817 21:34:24.657296    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:35:47.704880    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 21:36:07.547310    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-368596 --wait=true -v=8 --alsologtostderr: (2m1.992361384s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-368596
--- PASS: TestMultiNode/serial/RestartKeepsNodes (147.27s)

TestMultiNode/serial/DeleteNode (5.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-368596 node delete m03: (4.350094772s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.09s)
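
Note: deleting a node removes its container; the `docker volume ls` step above appears to be the test checking that no orphaned volume is left behind for the deleted node. Sketch:

	minikube -p multidemo node delete m03
	docker volume ls    # inspect for leftover volumes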

TestMultiNode/serial/StopMultiNode (24.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-368596 stop: (23.911870358s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-368596 status: exit status 7 (85.966414ms)

                                                
                                                
-- stdout --
	multinode-368596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-368596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr: exit status 7 (91.893835ms)

                                                
                                                
-- stdout --
	multinode-368596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-368596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:36:41.815972   96262 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:36:41.816169   96262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:36:41.816195   96262 out.go:309] Setting ErrFile to fd 2...
	I0817 21:36:41.816216   96262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:36:41.816483   96262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:36:41.816691   96262 out.go:303] Setting JSON to false
	I0817 21:36:41.816823   96262 mustload.go:65] Loading cluster: multinode-368596
	I0817 21:36:41.816961   96262 notify.go:220] Checking for updates...
	I0817 21:36:41.817227   96262 config.go:182] Loaded profile config "multinode-368596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:36:41.817267   96262 status.go:255] checking status of multinode-368596 ...
	I0817 21:36:41.817769   96262 cli_runner.go:164] Run: docker container inspect multinode-368596 --format={{.State.Status}}
	I0817 21:36:41.837268   96262 status.go:330] multinode-368596 host status = "Stopped" (err=<nil>)
	I0817 21:36:41.837286   96262 status.go:343] host is not running, skipping remaining checks
	I0817 21:36:41.837293   96262 status.go:257] multinode-368596 status: &{Name:multinode-368596 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:36:41.837324   96262 status.go:255] checking status of multinode-368596-m02 ...
	I0817 21:36:41.837691   96262 cli_runner.go:164] Run: docker container inspect multinode-368596-m02 --format={{.State.Status}}
	I0817 21:36:41.860124   96262 status.go:330] multinode-368596-m02 host status = "Stopped" (err=<nil>)
	I0817 21:36:41.860143   96262 status.go:343] host is not running, skipping remaining checks
	I0817 21:36:41.860150   96262 status.go:257] multinode-368596-m02 status: &{Name:multinode-368596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)

TestMultiNode/serial/RestartMultiNode (97.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-368596 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-368596 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m36.238163522s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-368596 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (97.08s)

TestMultiNode/serial/ValidateNameConflict (36.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-368596
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-368596-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-368596-m02 --driver=docker  --container-runtime=containerd: exit status 14 (80.076183ms)

                                                
                                                
-- stdout --
	* [multinode-368596-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-368596-m02' is duplicated with machine name 'multinode-368596-m02' in profile 'multinode-368596'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-368596-m03 --driver=docker  --container-runtime=containerd
E0817 21:38:40.497308    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-368596-m03 --driver=docker  --container-runtime=containerd: (33.582334405s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-368596
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-368596: exit status 80 (361.976318ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-368596
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-368596-m03 already exists in multinode-368596-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-368596-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-368596-m03: (2.082972348s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.16s)
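
Note: the exit status 14 (MK_USAGE) above comes from a profile name colliding with a machine name already owned by the multi-node profile; "-m02" was still in use, while "-m03" had been deleted in the earlier DeleteNode step and so was accepted as a fresh profile name. Sketch of the rejected case:

	minikube start -p multinode-368596-m02    # rejected: duplicates an existing machine name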

TestPreload (167.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-983380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0817 21:39:24.657696    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-983380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m27.555905621s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-983380 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-983380 image pull gcr.io/k8s-minikube/busybox: (1.366202769s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-983380
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-983380: (12.024305978s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-983380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0817 21:41:07.547000    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-983380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m3.954176425s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-983380 image list
helpers_test.go:175: Cleaning up "test-preload-983380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-983380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-983380: (2.357692651s)
--- PASS: TestPreload (167.50s)
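
Note: the preload test pulls an extra image into a --preload=false cluster and verifies it survives a stop/start cycle. Sketch of the same sequence (placeholder profile name):

	minikube start -p predemo --preload=false --kubernetes-version=v1.24.4
	minikube -p predemo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p predemo
	minikube start -p predemo
	minikube -p predemo image list    # busybox should still be present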

TestScheduledStopUnix (118.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-906964 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-906964 --memory=2048 --driver=docker  --container-runtime=containerd: (41.877935836s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906964 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-906964 -n scheduled-stop-906964
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906964 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906964 --cancel-scheduled
E0817 21:42:30.592540    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906964 -n scheduled-stop-906964
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-906964
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906964 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-906964
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-906964: exit status 7 (78.311644ms)

                                                
                                                
-- stdout --
	scheduled-stop-906964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906964 -n scheduled-stop-906964
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906964 -n scheduled-stop-906964: exit status 7 (84.962292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-906964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-906964
E0817 21:43:40.497256    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-906964: (4.634610737s)
--- PASS: TestScheduledStopUnix (118.16s)
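
Note: scheduled stops are armed, re-armed, and cancelled purely through `minikube stop` flags; the pending timer is observable via status. Sketch (placeholder profile name):

	minikube stop -p sdemo --schedule 5m                 # arm a stop five minutes out
	minikube status -p sdemo --format={{.TimeToStop}}    # inspect the pending timer
	minikube stop -p sdemo --cancel-scheduled            # disarm it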

TestInsufficientStorage (10.9s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-553805 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-553805 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.417794922s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0b487d21-5015-468f-b83d-9b1b62a61187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-553805] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9818dd9b-ca72-4fea-ace3-9190a609316d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16865"}}
	{"specversion":"1.0","id":"6bde7382-e921-43f4-83b3-0264a743e668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"774757ff-226a-469d-b4ba-1c2712063f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig"}}
	{"specversion":"1.0","id":"53e64a39-d9ec-4b43-8d7f-0067636a055a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube"}}
	{"specversion":"1.0","id":"48deca29-291b-4a5f-a426-5086dfec8144","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9c7c9254-c415-439c-969a-794f689d5497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e91f7515-4c38-4caf-ac78-97cafc2ff1b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f053e264-8554-44b3-bd7c-e4de5d26ab8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f1a66a8c-26ec-4ce0-a7c6-d4e65dfc897a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3897486-dbbf-4fd9-9e8d-637c4bd38bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c22eafd9-1825-4c01-bcaa-bcda7ef44108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-553805 in cluster insufficient-storage-553805","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"290bd4c5-3012-44dd-be5d-b16bb28fe657","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"926eea6b-d64e-4d9f-b667-9a38b947da97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c141a736-6425-4b89-95c4-5b909c014a99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
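
Note: the stdout above is a CloudEvents stream, one JSON object per line, produced by the --output=json flag; each event carries a type (io.k8s.sigs.minikube.step, .info, or .error) and a data payload. A minimal sketch of consuming such a stream, assuming it was saved to a hypothetical start.json and that jq is available:

    # print the human-readable message of every error event in the stream
    jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message' start.json
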
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-553805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-553805 --output=json --layout=cluster: exit status 7 (304.78597ms)

-- stdout --
	{"Name":"insufficient-storage-553805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-553805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0817 21:43:53.363349  113483 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-553805" does not appear in /home/jenkins/minikube-integration/16865-2431/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-553805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-553805 --output=json --layout=cluster: exit status 7 (304.93713ms)

-- stdout --
	{"Name":"insufficient-storage-553805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-553805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0817 21:43:53.668729  113536 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-553805" does not appear in /home/jenkins/minikube-integration/16865-2431/kubeconfig
	E0817 21:43:53.680470  113536 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/insufficient-storage-553805/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-553805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-553805
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-553805: (1.86927454s)
--- PASS: TestInsufficientStorage (10.90s)
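
The RSRC_DOCKER_STORAGE event above embeds its own remediation advice. Followed by hand, it would look roughly like this sketch (the prune commands delete unused Docker data; --force merely skips the storage check, as the error message notes):

    # 1. reclaim space on the host (add -a to also remove unused images)
    docker system prune -a
    # 2. reclaim space inside the minikube node (Docker container runtime only)
    minikube ssh -- docker system prune
    # 3. per the error message, the check itself can be bypassed
    minikube start -p insufficient-storage-553805 --force
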

TestRunningBinaryUpgrade (122.19s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.2719809670.exe start -p running-upgrade-095015 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0817 21:51:07.547545    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.2719809670.exe start -p running-upgrade-095015 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m17.743598537s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-095015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-095015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.240580768s)
helpers_test.go:175: Cleaning up "running-upgrade-095015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-095015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-095015: (2.963375407s)
--- PASS: TestRunningBinaryUpgrade (122.19s)
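
This test exercises an in-place binary upgrade: a released v1.22.0 binary creates the profile, and the binary under test then restarts it without an intervening stop. Reproduced by hand it is roughly the following sketch, where minikube-v1.22.0 stands in for a downloaded older release and the profile name is arbitrary:

    # create the cluster with the old release
    ./minikube-v1.22.0 start -p running-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    # upgrade in place: the new binary adopts the still-running profile
    ./minikube start -p running-upgrade --memory=2200 --driver=docker --container-runtime=containerd
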

TestKubernetesUpgrade (415.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.618010141s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-483730
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-483730: (4.381009418s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-483730 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-483730 status --format={{.Host}}: exit status 7 (100.094406ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m17.300234036s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-483730 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (122.894745ms)

-- stdout --
	* [kubernetes-upgrade-483730] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-483730
	    minikube start -p kubernetes-upgrade-483730 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4837302 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-483730 --kubernetes-version=v1.28.0-rc.1
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-483730 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.459276831s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-483730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-483730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-483730: (2.129397797s)
--- PASS: TestKubernetesUpgrade (415.27s)
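
The downgrade refusal (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) appears to be decided from the profile's stored version rather than by contacting the apiserver, which is consistent with the 122ms runtime; the subsequent restart at v1.28.0-rc.1 then succeeds. To confirm what a cluster actually runs before choosing a --kubernetes-version, the test's own check can be repeated by hand; the jq projection below is an illustrative assumption:

    kubectl --context kubernetes-upgrade-483730 version --output=json | jq -r '.serverVersion.gitVersion'
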

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (78.12059ms)

-- stdout --
	* [NoKubernetes-111976] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
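
Exit status 14 (MK_USAGE) is the expected outcome here: --no-kubernetes and --kubernetes-version are mutually exclusive. Following the suggestion in the stderr block, a clean Kubernetes-free start would look like this sketch:

    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-111976 --no-kubernetes --driver=docker --container-runtime=containerd
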

TestNoKubernetes/serial/StartWithK8s (49.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-111976 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-111976 --driver=docker  --container-runtime=containerd: (49.100481625s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-111976 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.75s)

TestNoKubernetes/serial/StartWithStopK8s (22.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --driver=docker  --container-runtime=containerd
E0817 21:45:03.540856    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --driver=docker  --container-runtime=containerd: (20.109314439s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-111976 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-111976 status -o json: exit status 2 (366.360239ms)

-- stdout --
	{"Name":"NoKubernetes-111976","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-111976
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-111976: (1.990173648s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.47s)

TestNoKubernetes/serial/Start (7.43s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-111976 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.434859151s)
--- PASS: TestNoKubernetes/serial/Start (7.43s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-111976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-111976 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.753981ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
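
The non-zero exit is the assertion: systemctl is-active exits 0 only when the unit is active, so the failing check means kubelet really is not running (the reported status 3 is consistent with an inactive unit). The same spot check by hand:

    minikube ssh -p NoKubernetes-111976 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero here means kubelet is not running
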

TestNoKubernetes/serial/ProfileList (0.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-111976
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-111976: (1.217881921s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-111976 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-111976 --driver=docker  --container-runtime=containerd: (6.645365438s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-111976 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-111976 "sudo systemctl is-active --quiet service kubelet": exit status 1 (424.757996ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestStoppedBinaryUpgrade/Setup (1.39s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.39s)

TestStoppedBinaryUpgrade/Upgrade (138.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.3034243523.exe start -p stopped-upgrade-605651 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0817 21:48:40.497452    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.3034243523.exe start -p stopped-upgrade-605651 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m16.325506158s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.3034243523.exe -p stopped-upgrade-605651 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.3034243523.exe -p stopped-upgrade-605651 stop: (12.938072615s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-605651 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0817 21:49:24.657767    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-605651 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.958288539s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (138.22s)
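
Unlike TestRunningBinaryUpgrade above, this variant stops the cluster with the old binary before the new one takes over, covering the cold-start upgrade path. A by-hand sketch under the same assumptions as before (a downloaded older release binary, arbitrary profile name):

    ./minikube-v1.22.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    ./minikube-v1.22.0 -p stopped-upgrade stop
    ./minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd
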

TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-605651
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-605651: (1.310426735s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

TestPause/serial/Start (107.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-565601 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-565601 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m47.367439991s)
--- PASS: TestPause/serial/Start (107.37s)

TestNetworkPlugins/group/false (4.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-893741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-893741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (232.384158ms)

-- stdout --
	* [false-893741] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0817 21:53:07.150267  148858 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:53:07.150802  148858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:53:07.150835  148858 out.go:309] Setting ErrFile to fd 2...
	I0817 21:53:07.150859  148858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:53:07.151167  148858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-2431/.minikube/bin
	I0817 21:53:07.151604  148858 out.go:303] Setting JSON to false
	I0817 21:53:07.152602  148858 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5726,"bootTime":1692303461,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0817 21:53:07.152701  148858 start.go:138] virtualization:  
	I0817 21:53:07.155424  148858 out.go:177] * [false-893741] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0817 21:53:07.157854  148858 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:53:07.158468  148858 notify.go:220] Checking for updates...
	I0817 21:53:07.161702  148858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:53:07.163680  148858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-2431/kubeconfig
	I0817 21:53:07.165235  148858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-2431/.minikube
	I0817 21:53:07.167275  148858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 21:53:07.169006  148858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:53:07.172818  148858 config.go:182] Loaded profile config "pause-565601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.4
	I0817 21:53:07.172949  148858 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:53:07.202545  148858 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:53:07.202652  148858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:53:07.318806  148858 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-17 21:53:07.304376541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0817 21:53:07.318929  148858 docker.go:294] overlay module found
	I0817 21:53:07.320855  148858 out.go:177] * Using the docker driver based on user configuration
	I0817 21:53:07.322437  148858 start.go:298] selected driver: docker
	I0817 21:53:07.322456  148858 start.go:902] validating driver "docker" against <nil>
	I0817 21:53:07.322469  148858 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:53:07.324916  148858 out.go:177] 
	W0817 21:53:07.326869  148858 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0817 21:53:07.329469  148858 out.go:177] 

** /stderr **
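
Exit status 14 is again the expected rejection: the containerd runtime requires a CNI, so --cni=false fails validation before any node is created. Assuming the built-in bridge plugin would be acceptable, a start that passes this check might look like:

    # containerd needs a CNI; the default --cni=auto or an explicit plugin satisfies the check
    minikube start -p false-893741 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd
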
net_test.go:88: 
----------------------- debugLogs start: false-893741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-893741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-893741

>>> host: /etc/nsswitch.conf:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/hosts:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/resolv.conf:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-893741

>>> host: crictl pods:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: crictl containers:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> k8s: describe netcat deployment:
error: context "false-893741" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-893741" does not exist

>>> k8s: netcat logs:
error: context "false-893741" does not exist

>>> k8s: describe coredns deployment:
error: context "false-893741" does not exist

>>> k8s: describe coredns pods:
error: context "false-893741" does not exist

>>> k8s: coredns logs:
error: context "false-893741" does not exist

>>> k8s: describe api server pod(s):
error: context "false-893741" does not exist

>>> k8s: api server logs:
error: context "false-893741" does not exist

>>> host: /etc/cni:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: ip a s:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: ip r s:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: iptables-save:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: iptables table nat:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> k8s: describe kube-proxy daemon set:
error: context "false-893741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-893741" does not exist

>>> k8s: kube-proxy logs:
error: context "false-893741" does not exist

>>> host: kubelet daemon status:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: kubelet daemon config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> k8s: kubelet logs:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-893741

>>> host: docker daemon status:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: docker daemon config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/docker/daemon.json:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: docker system info:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: cri-docker daemon status:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: cri-docker daemon config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: cri-dockerd version:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: containerd daemon status:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: containerd daemon config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/containerd/config.toml:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: containerd config dump:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: crio daemon status:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: crio daemon config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: /etc/crio:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

>>> host: crio config:
* Profile "false-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893741"

----------------------- debugLogs end: false-893741 [took: 4.232751702s] --------------------------------
helpers_test.go:175: Cleaning up "false-893741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-893741
--- PASS: TestNetworkPlugins/group/false (4.66s)

TestPause/serial/SecondStartNoReconfiguration (17.36s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-565601 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-565601 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.343725243s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.36s)

TestPause/serial/Pause (0.94s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-565601 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

TestPause/serial/VerifyStatus (0.46s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-565601 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-565601 --output=json --layout=cluster: exit status 2 (454.998156ms)

-- stdout --
	{"Name":"pause-565601","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-565601","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
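
Exit status 2 with StatusCode 418 ("Paused") is how a paused profile reports itself under --layout=cluster; the same numeric scheme appears elsewhere in this report (200 OK, 405 Stopped, 500 Error, 507 InsufficientStorage). A small jq sketch over that JSON, again assuming jq is available:

    # summarize each node component's status by name
    minikube status -p pause-565601 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
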

TestPause/serial/Unpause (0.94s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-565601 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

TestPause/serial/PauseAgain (1.13s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-565601 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-565601 --alsologtostderr -v=5: (1.130704436s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

TestPause/serial/DeletePaused (2.59s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-565601 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-565601 --alsologtostderr -v=5: (2.592719018s)
--- PASS: TestPause/serial/DeletePaused (2.59s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-565601
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-565601: exit status 1 (19.064415ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-565601: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestStartStop/group/old-k8s-version/serial/FirstStart (121.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-331855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0817 21:56:07.547632    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-331855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m1.500725004s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-331855 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a092b912-5819-4432-a8ae-2cc1c804cf27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a092b912-5819-4432-a8ae-2cc1c804cf27] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.032268351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-331855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)
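
The DeployApp steps in every group follow the same shape: create the busybox pod, poll until it reports Running, then read back the container's open-file limit. A hedged Go sketch of that polling loop using kubectl's jsonpath output (the helper is mine and assumes a single matching pod; context, selector, and timeout come from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForPod polls kubectl until the pod matching the selector reports
	// phase Running, or the deadline passes.
	func waitForPod(ctx, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
				"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil && string(out) == "Running" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q not Running within %s", selector, timeout)
	}

	func main() {
		const ctx = "old-k8s-version-331855"
		if err := waitForPod(ctx, "integration-test=busybox", 8*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		// Once Running, verify the container's file-descriptor limit, as the test does.
		out, _ := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
			"--", "/bin/sh", "-c", "ulimit -n").Output()
		fmt.Printf("ulimit -n: %s", out)
	}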

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-331855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-331855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)
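
Enabling metrics-server with `--images` and `--registries` overrides repoints the addon at fake.domain, a registry that can never serve the image, so the follow-up `kubectl describe` is what proves the override landed in the deployment spec. A sketch of that verification (the plain substring check is my assumption; the real test may match a more specific image reference):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-331855"
		// Enable the addon with the overrides shown in the log.
		enable := exec.Command("out/minikube-linux-arm64", "addons", "enable", "metrics-server",
			"-p", profile,
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain")
		if out, err := enable.CombinedOutput(); err != nil {
			fmt.Printf("enable failed: %v\n%s", err, out)
			return
		}
		// The deployment spec should now reference the overridden registry.
		desc, err := exec.Command("kubectl", "--context", profile, "describe",
			"deploy/metrics-server", "-n", "kube-system").Output()
		if err != nil {
			fmt.Println("describe failed:", err)
			return
		}
		fmt.Println("override applied:", strings.Contains(string(desc), "fake.domain"))
	}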

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-331855 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-331855 --alsologtostderr -v=3: (12.138306934s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-331855 -n old-k8s-version-331855
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-331855 -n old-k8s-version-331855: exit status 7 (69.614483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-331855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
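
`minikube status` reports cluster state through its exit code as well as stdout, so right after a stop the expected outcome is a non-zero exit; the harness accepts exit status 7 here as "may be ok". A sketch of that handling (reading 7 as the fully-stopped state is my reading of minikube's status exit-code convention, so treat it as an assumption):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-331855",
			"-n", "old-k8s-version-331855").Output()
		host := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("host running:", host)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Expected right after `minikube stop`: the host reports Stopped.
			fmt.Println("host stopped (exit 7, may be ok):", host)
		default:
			fmt.Println("status failed:", err)
		}
	}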

TestStartStop/group/old-k8s-version/serial/SecondStart (655.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-331855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-331855 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m55.082643838s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-331855 -n old-k8s-version-331855
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (655.45s)

TestStartStop/group/no-preload/serial/FirstStart (93.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-773648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1
E0817 21:58:40.497260    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 21:59:10.592735    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-773648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1: (1m33.994766191s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.99s)

TestStartStop/group/no-preload/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-773648 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3a712dfd-0598-4dd7-87a5-1fcbce19e3db] Pending
helpers_test.go:344: "busybox" [3a712dfd-0598-4dd7-87a5-1fcbce19e3db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3a712dfd-0598-4dd7-87a5-1fcbce19e3db] Running
E0817 21:59:24.658133    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.038057605s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-773648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-773648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-773648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.072797356s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-773648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-773648 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-773648 --alsologtostderr -v=3: (12.103639304s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-773648 -n no-preload-773648
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-773648 -n no-preload-773648: exit status 7 (70.766731ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-773648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (352.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-773648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1
E0817 22:01:07.547727    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 22:01:43.541459    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 22:03:40.497356    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 22:04:24.658064    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-773648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1: (5m51.598471653s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-773648 -n no-preload-773648
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (352.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5dcjf" [ab49cfe6-addb-411f-b67c-367b127cf610] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5dcjf" [ab49cfe6-addb-411f-b67c-367b127cf610] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.033506976s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5dcjf" [ab49cfe6-addb-411f-b67c-367b127cf610] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010223767s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-773648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-773648 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
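
The image audit sshes into the node, dumps `sudo crictl images -o json`, and flags anything outside minikube's expected image set. A sketch of the decode step (the struct mirrors only the repoTags field of crictl's JSON output; treating that shape as stable is an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the field the audit needs is declared.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "no-preload-773648", "sudo crictl images -o json").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println("found image:", tag)
			}
		}
	}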

TestStartStop/group/no-preload/serial/Pause (3.35s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-773648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-773648 -n no-preload-773648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-773648 -n no-preload-773648: exit status 2 (364.129497ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-773648 -n no-preload-773648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-773648 -n no-preload-773648: exit status 2 (343.385034ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-773648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-773648 -n no-preload-773648
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-773648 -n no-preload-773648
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.35s)
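
The Pause sequence relies on the same exit-code convention seen above: while paused, `status --format={{.APIServer}}` prints Paused and `{{.Kubelet}}` prints Stopped, each with exit status 2, and both queries succeed again after unpause. A sketch of that round trip (the helper name and output handling are mine):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// status returns the templated status field and the command's exit code.
	func status(profile, field string) (string, int) {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		}
		return strings.TrimSpace(string(out)), code
	}

	func main() {
		const p = "no-preload-773648"
		exec.Command("out/minikube-linux-arm64", "pause", "-p", p).Run()
		api, code := status(p, "APIServer")
		fmt.Printf("APIServer=%s exit=%d\n", api, code) // expect Paused, exit 2
		kubelet, code2 := status(p, "Kubelet")
		fmt.Printf("Kubelet=%s exit=%d\n", kubelet, code2) // expect Stopped, exit 2
		exec.Command("out/minikube-linux-arm64", "unpause", "-p", p).Run()
	}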

TestStartStop/group/embed-certs/serial/FirstStart (89.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-441721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4
E0817 22:06:07.546993    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-441721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4: (1m29.973179973s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.97s)

TestStartStop/group/embed-certs/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-441721 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a4011985-e9ef-4ac5-9b52-5efa1d84e700] Pending
helpers_test.go:344: "busybox" [a4011985-e9ef-4ac5-9b52-5efa1d84e700] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a4011985-e9ef-4ac5-9b52-5efa1d84e700] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.046159301s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-441721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.60s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-441721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-441721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.396227982s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-441721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-441721 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-441721 --alsologtostderr -v=3: (12.129542109s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-441721 -n embed-certs-441721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-441721 -n embed-certs-441721: exit status 7 (74.130166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-441721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (342.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-441721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-441721 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4: (5m42.084714189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-441721 -n embed-certs-441721
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (342.47s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mb4zt" [555de2ab-8629-4610-821b-8a54b2c19ab7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025329953s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mb4zt" [555de2ab-8629-4610-821b-8a54b2c19ab7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00960862s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-331855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-331855 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.54s)

TestStartStop/group/old-k8s-version/serial/Pause (4.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-331855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-331855 --alsologtostderr -v=1: (1.284724986s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-331855 -n old-k8s-version-331855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-331855 -n old-k8s-version-331855: exit status 2 (568.921278ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-331855 -n old-k8s-version-331855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-331855 -n old-k8s-version-331855: exit status 2 (487.280479ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-331855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-331855 --alsologtostderr -v=1: (1.018704059s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-331855 -n old-k8s-version-331855
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-331855 -n old-k8s-version-331855
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-989644 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4
E0817 22:08:40.497742    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
E0817 22:09:07.706588    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 22:09:18.135439    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.141504    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.151759    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.172021    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.212186    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.292508    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.452855    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:18.773691    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:19.413876    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:20.694689    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:23.255058    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:24.658293    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 22:09:28.375820    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:09:38.616633    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-989644 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4: (1m22.907532587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.91s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-989644 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f41d84d3-53b9-4268-8c59-4a2c89dc95e8] Pending
helpers_test.go:344: "busybox" [f41d84d3-53b9-4268-8c59-4a2c89dc95e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f41d84d3-53b9-4268-8c59-4a2c89dc95e8] Running
E0817 22:09:59.096818    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.035490285s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-989644 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-989644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-989644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139366898s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-989644 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-989644 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-989644 --alsologtostderr -v=3: (12.124931136s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644: exit status 7 (78.970981ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-989644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-989644 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4
E0817 22:10:40.057326    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:11:07.548013    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 22:11:55.582093    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.587397    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.597728    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.618052    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.658310    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.738728    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:55.899062    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:56.219467    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:56.860264    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:11:58.140966    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:12:00.701186    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:12:01.977525    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:12:05.822137    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:12:16.062650    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:12:36.543490    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
E0817 22:13:17.503901    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-989644 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.4: (5m54.488814566s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.97s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-k27gf" [33f00ca6-4b38-4613-9b4f-333407aa6d99] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-k27gf" [33f00ca6-4b38-4613-9b4f-333407aa6d99] Running
E0817 22:13:40.497104    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/ingress-addon-legacy-679314/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.025238086s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-k27gf" [33f00ca6-4b38-4613-9b4f-333407aa6d99] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011567136s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-441721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-441721 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-441721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-441721 -n embed-certs-441721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-441721 -n embed-certs-441721: exit status 2 (339.689735ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-441721 -n embed-certs-441721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-441721 -n embed-certs-441721: exit status 2 (346.156103ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-441721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-441721 -n embed-certs-441721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-441721 -n embed-certs-441721
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

TestStartStop/group/newest-cni/serial/FirstStart (43.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-475681 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1
E0817 22:14:18.136079    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
E0817 22:14:24.657382    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/addons-028423/client.crt: no such file or directory
E0817 22:14:39.424352    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-475681 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1: (43.714783314s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.71s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-475681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-475681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.298858273s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-475681 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-475681 --alsologtostderr -v=3: (1.263412783s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-475681 -n newest-cni-475681
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-475681 -n newest-cni-475681: exit status 7 (63.799365ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-475681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (30.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-475681 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1
E0817 22:14:45.818285    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/no-preload-773648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-475681 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0-rc.1: (29.856935398s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-475681 -n newest-cni-475681
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-475681 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-475681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-475681 -n newest-cni-475681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-475681 -n newest-cni-475681: exit status 2 (324.172791ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-475681 -n newest-cni-475681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-475681 -n newest-cni-475681: exit status 2 (353.739512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-475681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-475681 -n newest-cni-475681
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-475681 -n newest-cni-475681
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

TestNetworkPlugins/group/auto/Start (100.6s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0817 22:15:50.593418    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
E0817 22:16:07.547205    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m40.593811994s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.60s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-55wlk" [27a604db-12f7-4962-8af8-9737e28b45ba] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-55wlk" [27a604db-12f7-4962-8af8-9737e28b45ba] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.028427503s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-55wlk" [27a604db-12f7-4962-8af8-9737e28b45ba] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010999072s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-989644 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-989644 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-989644 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644: exit status 2 (354.875671ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644: exit status 2 (351.646202ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-989644 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-989644 -n default-k8s-diff-port-989644
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)
E0817 22:22:10.972554    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/auto-893741/client.crt: no such file or directory
E0817 22:22:21.212816    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/auto-893741/client.crt: no such file or directory
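
Note: the E0817 cert_rotation.go lines interleaved through this part of the log appear to be background certificate-reload errors from the shared test client: they reference client.crt files of profiles (functional-545557, auto-893741, ...) that had already been deleted, and are unrelated to the tests passing around them. Which profile directories still exist can be checked directly on the build host:

  ls /home/jenkins/minikube-integration/16865-2431/.minikube/profiles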

TestNetworkPlugins/group/kindnet/Start (90.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0817 22:16:55.582105    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/old-k8s-version-331855/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.794693599s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.79s)

TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

TestNetworkPlugins/group/auto/NetCatPod (10.59s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-v4fhf" [09966342-32c9-457b-b679-e1fcae5be0f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-v4fhf" [09966342-32c9-457b-b679-e1fcae5be0f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.016046474s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.59s)
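
Note: NetCatPod deploys testdata/netcat-deployment.yaml and polls until a matching pod is Running. An equivalent standalone readiness check with kubectl, assuming the same context and label, would be roughly:

  kubectl --context auto-893741 wait --for=condition=Ready pod -l app=netcat --timeout=900s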

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
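
Note: Localhost and HairPin both run nc inside the netcat pod; the hairpin case dials the pod's own service name (netcat:8080), so it effectively verifies hairpin NAT in the CNI. Only the exit status matters, e.g.:

  kubectl --context auto-893741 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080'; echo "exit=$?"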

TestNetworkPlugins/group/calico/Start (83.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m23.449938502s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.45s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bv989" [96ae0015-fd10-4cfd-950a-d25e2bb1823d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030344541s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
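
Note: ControllerPod waits for the CNI's own pod to be healthy before any connectivity test runs. By hand, against the same cluster, that is approximately:

  kubectl --context kindnet-893741 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s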

TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.6s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mqrxg" [62633498-111a-4786-a730-70cfd580485b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mqrxg" [62633498-111a-4786-a730-70cfd580485b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.030969429s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.60s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (65.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m5.392554319s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.39s)

TestNetworkPlugins/group/calico/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j27cj" [03830420-bd62-4877-b92e-d4889de14a42] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.051478734s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (11.6s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-w7r58" [05c3999f-bae8-47ae-b26d-166df0da6b0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-w7r58" [05c3999f-bae8-47ae-b26d-166df0da6b0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.018995392s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.60s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (45.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (45.41412578s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.41s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p2pw4" [8d0a90be-5e79-4cb3-9b30-17a256749bb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0817 22:19:53.869738    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:53.875772    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:53.886578    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:53.906844    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:53.947077    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:54.027331    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:54.188220    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:54.508710    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:55.149775    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:19:56.429912    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-p2pw4" [8d0a90be-5e79-4cb3-9b30-17a256749bb1] Running
E0817 22:19:58.990582    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.012736945s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.51s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/flannel/Start (58.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.319817778s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vvpqw" [251c86ab-c480-4165-bad2-725c8f2b6968] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vvpqw" [251c86ab-c480-4165-bad2-725c8f2b6968] Running
E0817 22:20:34.835658    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012249752s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

TestNetworkPlugins/group/enable-default-cni/DNS (33.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.232241221s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default
E0817 22:21:07.547471    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/functional-545557/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.295769957s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (33.20s)
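
Note: this DNS check failed twice with "connection timed out; no servers could be reached" and only succeeded on the third attempt, so most of the 33.20s is the two timed-out lookups. A comparable manual retry loop, assuming the same context:

  for i in 1 2 3; do
    kubectl --context enable-default-cni-893741 exec deployment/netcat -- nslookup kubernetes.default && break
    sleep 10
  done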

TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4d7qm" [699267c7-a545-4821-a327-76b258d8ecc3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.032744758s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (9.54s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hgp4p" [37fca91d-8824-4213-ab28-4aa1025d281b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hgp4p" [37fca91d-8824-4213-ab28-4aa1025d281b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.038601535s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.54s)

TestNetworkPlugins/group/bridge/Start (48.33s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-893741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.33337216s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.33s)

TestNetworkPlugins/group/flannel/DNS (0.58s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-893741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.58s)

TestNetworkPlugins/group/flannel/Localhost (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.37s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-893741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-893741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-gcq6p" [85e26563-7569-438c-af62-dd0d78c80c0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-gcq6p" [85e26563-7569-438c-af62-dd0d78c80c0d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010895082s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.34s)

TestNetworkPlugins/group/bridge/DNS (34.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-893741 exec deployment/netcat -- nslookup kubernetes.default
E0817 22:22:37.717220    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/default-k8s-diff-port-989644/client.crt: no such file or directory
E0817 22:22:41.693271    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/auto-893741/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-893741 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.2066807s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-893741 exec deployment/netcat -- nslookup kubernetes.default
E0817 22:23:04.131942    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.137196    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.147467    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.167726    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.208057    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.288255    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-893741 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.193004473s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
                                                
** /stderr **
E0817 22:23:04.448663    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:04.769170    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
E0817 22:23:05.410123    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
net_test.go:175: (dbg) Run:  kubectl --context bridge-893741 exec deployment/netcat -- nslookup kubernetes.default
E0817 22:23:06.690560    7745 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/kindnet-893741/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (34.19s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-893741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (31/310)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)
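
Note: the cached-images and binaries skips key off the same preload check: when a preloaded tarball is already cached, there is nothing left to download or verify. Per minikube's usual cache layout (an assumption; the exact path is not shown in this log), the tarball would live under:

  ls /home/jenkins/minikube-integration/16865-2431/.minikube/cache/preloaded-tarball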

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-403495 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-403495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-403495
--- SKIP: TestDownloadOnlyKic (0.58s)
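
Note: even though the test skips on arm64, the start invocation above had already created a profile, hence the explicit cleanup. Whether anything was left behind can be confirmed with:

  out/minikube-linux-arm64 profile list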

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-122755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-122755
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.83s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-893741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-893741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-893741

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/hosts:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/resolv.conf:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-893741

>>> host: crictl pods:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: crictl containers:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> k8s: describe netcat deployment:
error: context "kubenet-893741" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-893741" does not exist

>>> k8s: netcat logs:
error: context "kubenet-893741" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-893741" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-893741" does not exist

>>> k8s: coredns logs:
error: context "kubenet-893741" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-893741" does not exist

>>> k8s: api server logs:
error: context "kubenet-893741" does not exist

>>> host: /etc/cni:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: ip a s:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: ip r s:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: iptables-save:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: iptables table nat:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-893741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-893741" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-893741" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: kubelet daemon config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> k8s: kubelet logs:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-893741

>>> host: docker daemon status:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: docker daemon config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: docker system info:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: cri-docker daemon status:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: cri-docker daemon config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: cri-dockerd version:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: containerd daemon status:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: containerd daemon config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: containerd config dump:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: crio daemon status:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: crio daemon config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: /etc/crio:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

>>> host: crio config:
* Profile "kubenet-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893741"

----------------------- debugLogs end: kubenet-893741 [took: 3.589584586s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-893741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-893741
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)
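
Every probe in the debugLogs block above fails with "context was not found" or "Profile ... not found" for the same reason: the kubenet profile was skipped before a cluster was ever started, but the log collector still runs each command and records whatever it prints. A sketch of that tolerant collection loop, with the probe table abbreviated and all names assumed:

package example

import (
	"fmt"
	"os/exec"
)

// collectDebugLogs runs each probe against the named kubectl context,
// even if that context was never created; errors such as "context was
// not found" are captured as output instead of aborting the sweep.
func collectDebugLogs(kubeContext string) {
	probes := map[string][]string{
		"netcat: nslookup kubernetes.default": {"kubectl", "--context", kubeContext, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"},
		"k8s: coredns logs":                   {"kubectl", "--context", kubeContext, "logs", "-n", "kube-system", "-l", "k8s-app=kube-dns"},
	}
	for label, argv := range probes {
		out, _ := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", label, out)
	}
}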

TestNetworkPlugins/group/cilium (3.95s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-893741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-893741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-893741

>>> host: /etc/nsswitch.conf:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/hosts:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/resolv.conf:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-893741

>>> host: crictl pods:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: crictl containers:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> k8s: describe netcat deployment:
error: context "cilium-893741" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-893741" does not exist

>>> k8s: netcat logs:
error: context "cilium-893741" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-893741" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-893741" does not exist

>>> k8s: coredns logs:
error: context "cilium-893741" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-893741" does not exist

>>> k8s: api server logs:
error: context "cilium-893741" does not exist

>>> host: /etc/cni:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: ip a s:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: ip r s:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: iptables-save:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: iptables table nat:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-893741

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-893741

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-893741" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-893741" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-893741

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-893741

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-893741" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-893741" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-893741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-893741" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-893741" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: kubelet daemon config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> k8s: kubelet logs:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-2431/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:53:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-565601
contexts:
- context:
    cluster: pause-565601
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:53:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-565601
  name: pause-565601
current-context: pause-565601
kind: Config
preferences: {}
users:
- name: pause-565601
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/pause-565601/client.crt
    client-key: /home/jenkins/minikube-integration/16865-2431/.minikube/profiles/pause-565601/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-893741

>>> host: docker daemon status:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: docker daemon config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: docker system info:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: cri-docker daemon status:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: cri-docker daemon config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: cri-dockerd version:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: containerd daemon status:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: containerd daemon config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: containerd config dump:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: crio daemon status:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: crio daemon config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: /etc/crio:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

>>> host: crio config:
* Profile "cilium-893741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893741"

----------------------- debugLogs end: cilium-893741 [took: 3.77191674s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-893741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-893741
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)
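
A detail worth noticing in the cilium block's "kubectl config" dump: the kubeconfig on the Jenkins host contains only a leftover pause-565601 entry, which is why every probe addressed to cilium-893741 reports a missing context. A small sketch for inspecting what a kubeconfig actually contains, using client-go's clientcmd loader (the default-path fallback is an assumption):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Prefer $KUBECONFIG, else fall back to the conventional location.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			log.Fatal(err)
		}
		path = filepath.Join(home, ".kube", "config")
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
}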