Test Report: Docker_Linux_containerd_arm64 16969

                    
e754f159ea363f9e33ad2331b33fc10ae6e501a8:2023-07-31:30375

Test fail (9/304)

TestAddons/parallel/Ingress (36.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-315335 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-315335 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-315335 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [365c4ae6-fea9-40ee-b780-f9abb039df84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [365c4ae6-fea9-40ee-b780-f9abb039df84] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.012190763s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-315335 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.055663852s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-315335 addons disable ingress-dns --alsologtostderr -v=1: (1.4969726s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-315335 addons disable ingress --alsologtostderr -v=1: (7.777565202s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-315335
helpers_test.go:235: (dbg) docker inspect addons-315335:

-- stdout --
	[
	    {
	        "Id": "6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd",
	        "Created": "2023-07-31T10:38:30.932273622Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3622361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T10:38:31.269766236Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd/hosts",
	        "LogPath": "/var/lib/docker/containers/6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd/6991d95d898e243240f2c20876cb8617cc8b52a3c4d84bd876c268b048628fbd-json.log",
	        "Name": "/addons-315335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-315335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-315335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b469d69f9befa733c804e4a7d49deaee43383f8e386a462410f201c2bc991e72-init/diff:/var/lib/docker/overlay2/f6e468e16ca02ac051c3ef69ec9d67702b3bb9f63235ab1123ef1010168b87cf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b469d69f9befa733c804e4a7d49deaee43383f8e386a462410f201c2bc991e72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b469d69f9befa733c804e4a7d49deaee43383f8e386a462410f201c2bc991e72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b469d69f9befa733c804e4a7d49deaee43383f8e386a462410f201c2bc991e72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-315335",
	                "Source": "/var/lib/docker/volumes/addons-315335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-315335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-315335",
	                "name.minikube.sigs.k8s.io": "addons-315335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4895e1507aed968af08cb16c75898e74048f9b4d3f27e2549ee6cd4c1a8f0b0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35338"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35334"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35336"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35335"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f4895e1507ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-315335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6991d95d898e",
	                        "addons-315335"
	                    ],
	                    "NetworkID": "c6cf9bd62dba46a7a938298892f1ab4ce54994148ddead8b051e38660a1c0d9d",
	                    "EndpointID": "c899439744f2f2fcd300ff0d8c71e86f7f55fdf986ebce35a681a60dd2ec3776",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-315335 -n addons-315335
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-315335 logs -n 25: (1.467927976s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-106199   | jenkins | v1.31.1 | 31 Jul 23 10:37 UTC |                     |
	|         | -p download-only-106199        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-106199   | jenkins | v1.31.1 | 31 Jul 23 10:37 UTC |                     |
	|         | -p download-only-106199        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:38 UTC |
	| delete  | -p download-only-106199        | download-only-106199   | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:38 UTC |
	| delete  | -p download-only-106199        | download-only-106199   | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:38 UTC |
	| start   | --download-only -p             | download-docker-358242 | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC |                     |
	|         | download-docker-358242         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-358242      | download-docker-358242 | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:38 UTC |
	| start   | --download-only -p             | binary-mirror-830617   | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC |                     |
	|         | binary-mirror-830617           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45625         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-830617        | binary-mirror-830617   | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:38 UTC |
	| start   | -p addons-315335               | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:38 UTC | 31 Jul 23 10:40 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | addons-315335                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | -p addons-315335               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-315335 ip               | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	| addons  | addons-315335 addons disable   | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-315335 addons           | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | addons-315335                  |                        |         |         |                     |                     |
	| ssh     | addons-315335 ssh curl -s      | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-315335 ip               | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:40 UTC |
	| addons  | addons-315335 addons           | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:40 UTC | 31 Jul 23 10:41 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-315335 addons           | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:41 UTC | 31 Jul 23 10:41 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-315335 addons disable   | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:41 UTC | 31 Jul 23 10:41 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-315335 addons disable   | addons-315335          | jenkins | v1.31.1 | 31 Jul 23 10:41 UTC | 31 Jul 23 10:41 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:38:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:38:08.883194 3621900 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:38:08.883408 3621900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:38:08.883437 3621900 out.go:309] Setting ErrFile to fd 2...
	I0731 10:38:08.883460 3621900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:38:08.883740 3621900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:38:08.884170 3621900 out.go:303] Setting JSON to false
	I0731 10:38:08.885039 3621900 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66036,"bootTime":1690733853,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:38:08.885145 3621900 start.go:138] virtualization:  
	I0731 10:38:08.887569 3621900 out.go:177] * [addons-315335] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 10:38:08.889260 3621900 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 10:38:08.891210 3621900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:38:08.889452 3621900 notify.go:220] Checking for updates...
	I0731 10:38:08.894580 3621900 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:38:08.896225 3621900 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:38:08.897842 3621900 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 10:38:08.899589 3621900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:38:08.901421 3621900 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:38:08.924029 3621900 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:38:08.924120 3621900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:38:08.999409 3621900 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 10:38:08.990223534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:38:08.999511 3621900 docker.go:294] overlay module found
	I0731 10:38:09.002992 3621900 out.go:177] * Using the docker driver based on user configuration
	I0731 10:38:09.004987 3621900 start.go:298] selected driver: docker
	I0731 10:38:09.005014 3621900 start.go:898] validating driver "docker" against <nil>
	I0731 10:38:09.005038 3621900 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:38:09.005891 3621900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:38:09.078615 3621900 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 10:38:09.06845927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:38:09.078782 3621900 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 10:38:09.079010 3621900 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:38:09.080881 3621900 out.go:177] * Using Docker driver with root privileges
	I0731 10:38:09.082690 3621900 cni.go:84] Creating CNI manager for ""
	I0731 10:38:09.082707 3621900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:38:09.082723 3621900 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:38:09.082745 3621900 start_flags.go:319] config:
	{Name:addons-315335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-315335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:38:09.086108 3621900 out.go:177] * Starting control plane node addons-315335 in cluster addons-315335
	I0731 10:38:09.088062 3621900 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0731 10:38:09.089923 3621900 out.go:177] * Pulling base image ...
	I0731 10:38:09.091751 3621900 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0731 10:38:09.091800 3621900 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0731 10:38:09.091812 3621900 cache.go:57] Caching tarball of preloaded images
	I0731 10:38:09.091848 3621900 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:38:09.091895 3621900 preload.go:174] Found /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:38:09.091905 3621900 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0731 10:38:09.092276 3621900 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/config.json ...
	I0731 10:38:09.092299 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/config.json: {Name:mk3e9a3b40eee3974328ea372b3554a70c23846c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:09.109571 3621900 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:38:09.109717 3621900 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:38:09.109744 3621900 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 10:38:09.109751 3621900 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 10:38:09.109763 3621900 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 10:38:09.109773 3621900 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0731 10:38:24.507583 3621900 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0731 10:38:24.507619 3621900 cache.go:195] Successfully downloaded all kic artifacts
	I0731 10:38:24.507670 3621900 start.go:365] acquiring machines lock for addons-315335: {Name:mk8970c1973ae7ae53e6ed68f4ccdf59306cb4aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:38:24.507780 3621900 start.go:369] acquired machines lock for "addons-315335" in 87.385µs
	I0731 10:38:24.507809 3621900 start.go:93] Provisioning new machine with config: &{Name:addons-315335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-315335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0731 10:38:24.507894 3621900 start.go:125] createHost starting for "" (driver="docker")
	I0731 10:38:24.509701 3621900 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0731 10:38:24.509923 3621900 start.go:159] libmachine.API.Create for "addons-315335" (driver="docker")
	I0731 10:38:24.509952 3621900 client.go:168] LocalClient.Create starting
	I0731 10:38:24.510034 3621900 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem
	I0731 10:38:24.678023 3621900 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem
	I0731 10:38:24.815574 3621900 cli_runner.go:164] Run: docker network inspect addons-315335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 10:38:24.838416 3621900 cli_runner.go:211] docker network inspect addons-315335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 10:38:24.838519 3621900 network_create.go:281] running [docker network inspect addons-315335] to gather additional debugging logs...
	I0731 10:38:24.838543 3621900 cli_runner.go:164] Run: docker network inspect addons-315335
	W0731 10:38:24.854998 3621900 cli_runner.go:211] docker network inspect addons-315335 returned with exit code 1
	I0731 10:38:24.855032 3621900 network_create.go:284] error running [docker network inspect addons-315335]: docker network inspect addons-315335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-315335 not found
	I0731 10:38:24.855046 3621900 network_create.go:286] output of [docker network inspect addons-315335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-315335 not found
	
	** /stderr **
	I0731 10:38:24.855116 3621900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:38:24.872072 3621900 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40012f2820}
	I0731 10:38:24.872109 3621900 network_create.go:123] attempt to create docker network addons-315335 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 10:38:24.872162 3621900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-315335 addons-315335
	I0731 10:38:24.941005 3621900 network_create.go:107] docker network addons-315335 192.168.49.0/24 created
	I0731 10:38:24.941031 3621900 kic.go:117] calculated static IP "192.168.49.2" for the "addons-315335" container
	I0731 10:38:24.941129 3621900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 10:38:24.956104 3621900 cli_runner.go:164] Run: docker volume create addons-315335 --label name.minikube.sigs.k8s.io=addons-315335 --label created_by.minikube.sigs.k8s.io=true
	I0731 10:38:24.972725 3621900 oci.go:103] Successfully created a docker volume addons-315335
	I0731 10:38:24.972808 3621900 cli_runner.go:164] Run: docker run --rm --name addons-315335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-315335 --entrypoint /usr/bin/test -v addons-315335:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 10:38:26.838088 3621900 cli_runner.go:217] Completed: docker run --rm --name addons-315335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-315335 --entrypoint /usr/bin/test -v addons-315335:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.865239007s)
	I0731 10:38:26.838120 3621900 oci.go:107] Successfully prepared a docker volume addons-315335
	I0731 10:38:26.838144 3621900 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0731 10:38:26.838162 3621900 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 10:38:26.838250 3621900 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-315335:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 10:38:30.853515 3621900 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-315335:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.01521405s)
	I0731 10:38:30.853549 3621900 kic.go:199] duration metric: took 4.015383 seconds to extract preloaded images to volume
	W0731 10:38:30.853680 3621900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 10:38:30.853794 3621900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 10:38:30.917483 3621900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-315335 --name addons-315335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-315335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-315335 --network addons-315335 --ip 192.168.49.2 --volume addons-315335:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 10:38:31.278344 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Running}}
	I0731 10:38:31.308334 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:38:31.330675 3621900 cli_runner.go:164] Run: docker exec addons-315335 stat /var/lib/dpkg/alternatives/iptables
	I0731 10:38:31.417353 3621900 oci.go:144] the created container "addons-315335" has a running status.
	I0731 10:38:31.417377 3621900 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa...
	I0731 10:38:31.993273 3621900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 10:38:32.017338 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:38:32.043349 3621900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 10:38:32.043367 3621900 kic_runner.go:114] Args: [docker exec --privileged addons-315335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 10:38:32.158106 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:38:32.185311 3621900 machine.go:88] provisioning docker machine ...
	I0731 10:38:32.185340 3621900 ubuntu.go:169] provisioning hostname "addons-315335"
	I0731 10:38:32.185403 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:32.207026 3621900 main.go:141] libmachine: Using SSH client type: native
	I0731 10:38:32.207462 3621900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35338 <nil> <nil>}
	I0731 10:38:32.207475 3621900 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-315335 && echo "addons-315335" | sudo tee /etc/hostname
	I0731 10:38:32.384968 3621900 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-315335
	
	I0731 10:38:32.385058 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:32.411561 3621900 main.go:141] libmachine: Using SSH client type: native
	I0731 10:38:32.412000 3621900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35338 <nil> <nil>}
	I0731 10:38:32.412023 3621900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-315335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-315335/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-315335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:38:32.554114 3621900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:38:32.554184 3621900 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16969-3616075/.minikube CaCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16969-3616075/.minikube}
	I0731 10:38:32.554229 3621900 ubuntu.go:177] setting up certificates
	I0731 10:38:32.554267 3621900 provision.go:83] configureAuth start
	I0731 10:38:32.554376 3621900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-315335
	I0731 10:38:32.572199 3621900 provision.go:138] copyHostCerts
	I0731 10:38:32.572271 3621900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem (1082 bytes)
	I0731 10:38:32.572407 3621900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem (1123 bytes)
	I0731 10:38:32.572469 3621900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem (1679 bytes)
	I0731 10:38:32.572520 3621900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem org=jenkins.addons-315335 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-315335]
	I0731 10:38:32.881673 3621900 provision.go:172] copyRemoteCerts
	I0731 10:38:32.881766 3621900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:38:32.881812 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:32.898726 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:38:32.991374 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:38:33.022209 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 10:38:33.051039 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 10:38:33.080354 3621900 provision.go:86] duration metric: configureAuth took 526.042449ms
	I0731 10:38:33.080379 3621900 ubuntu.go:193] setting minikube options for container-runtime
	I0731 10:38:33.080571 3621900 config.go:182] Loaded profile config "addons-315335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:38:33.080579 3621900 machine.go:91] provisioned docker machine in 895.25098ms
	I0731 10:38:33.080585 3621900 client.go:171] LocalClient.Create took 8.570628212s
	I0731 10:38:33.080599 3621900 start.go:167] duration metric: libmachine.API.Create for "addons-315335" took 8.570673431s
	I0731 10:38:33.080606 3621900 start.go:300] post-start starting for "addons-315335" (driver="docker")
	I0731 10:38:33.080613 3621900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:38:33.080671 3621900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:38:33.080714 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:33.098707 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:38:33.191674 3621900 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:38:33.195776 3621900 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 10:38:33.195845 3621900 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 10:38:33.195875 3621900 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 10:38:33.195882 3621900 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 10:38:33.195895 3621900 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 10:38:33.195958 3621900 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 10:38:33.195986 3621900 start.go:303] post-start completed in 115.374524ms
	I0731 10:38:33.196297 3621900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-315335
	I0731 10:38:33.212946 3621900 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/config.json ...
	I0731 10:38:33.213246 3621900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 10:38:33.213297 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:33.232324 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:38:33.323026 3621900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 10:38:33.328516 3621900 start.go:128] duration metric: createHost completed in 8.820593867s
	I0731 10:38:33.328575 3621900 start.go:83] releasing machines lock for "addons-315335", held for 8.820781978s
	I0731 10:38:33.328670 3621900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-315335
	I0731 10:38:33.345386 3621900 ssh_runner.go:195] Run: cat /version.json
	I0731 10:38:33.345439 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:33.345465 3621900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:38:33.345526 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:38:33.368085 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:38:33.375641 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:38:33.465424 3621900 ssh_runner.go:195] Run: systemctl --version
	I0731 10:38:33.608143 3621900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 10:38:33.613406 3621900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0731 10:38:33.642070 3621900 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0731 10:38:33.642150 3621900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 10:38:33.673092 3621900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 10:38:33.673150 3621900 start.go:466] detecting cgroup driver to use...
	I0731 10:38:33.673181 3621900 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 10:38:33.673233 3621900 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 10:38:33.688029 3621900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:38:33.700888 3621900 docker.go:196] disabling cri-docker service (if available) ...
	I0731 10:38:33.700957 3621900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 10:38:33.716108 3621900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 10:38:33.732384 3621900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 10:38:33.836189 3621900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 10:38:33.946367 3621900 docker.go:212] disabling docker service ...
	I0731 10:38:33.946447 3621900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 10:38:33.966814 3621900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 10:38:33.980126 3621900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 10:38:34.087814 3621900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 10:38:34.186336 3621900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 10:38:34.199579 3621900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:38:34.218423 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0731 10:38:34.230315 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 10:38:34.241916 3621900 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 10:38:34.242021 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 10:38:34.253368 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:38:34.265219 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 10:38:34.276201 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:38:34.287408 3621900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:38:34.298174 3621900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 10:38:34.310267 3621900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:38:34.320023 3621900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:38:34.330011 3621900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:38:34.424189 3621900 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:38:34.524259 3621900 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0731 10:38:34.524342 3621900 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0731 10:38:34.528956 3621900 start.go:534] Will wait 60s for crictl version
	I0731 10:38:34.529026 3621900 ssh_runner.go:195] Run: which crictl
	I0731 10:38:34.533585 3621900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:38:34.591386 3621900 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0731 10:38:34.591539 3621900 ssh_runner.go:195] Run: containerd --version
	I0731 10:38:34.620856 3621900 ssh_runner.go:195] Run: containerd --version
	I0731 10:38:34.652113 3621900 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0731 10:38:34.654286 3621900 cli_runner.go:164] Run: docker network inspect addons-315335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:38:34.670685 3621900 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 10:38:34.675083 3621900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
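The `/etc/hosts` update logged above is idempotent: it strips any stale line for the name before appending the fresh mapping. A sketch of the same idiom against a temp file (illustrative only, not minikube's code):

```shell
# Idempotent hosts-entry update: remove old entry, append new one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# Re-running leaves exactly one entry for the name.
grep -c 'host.minikube.internal' "$hosts"
rm -f "$hosts"
```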
	I0731 10:38:34.688367 3621900 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0731 10:38:34.688454 3621900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:38:34.728731 3621900 containerd.go:604] all images are preloaded for containerd runtime.
	I0731 10:38:34.728755 3621900 containerd.go:518] Images already preloaded, skipping extraction
	I0731 10:38:34.728810 3621900 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:38:34.772450 3621900 containerd.go:604] all images are preloaded for containerd runtime.
	I0731 10:38:34.772473 3621900 cache_images.go:84] Images are preloaded, skipping loading
	I0731 10:38:34.772528 3621900 ssh_runner.go:195] Run: sudo crictl info
	I0731 10:38:34.813497 3621900 cni.go:84] Creating CNI manager for ""
	I0731 10:38:34.813521 3621900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:38:34.813533 3621900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 10:38:34.813552 3621900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-315335 NodeName:addons-315335 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 10:38:34.813767 3621900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-315335"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
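The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). kubeadm validates it fully at `init` time; a minimal offline sanity check that the expected document kinds are present, using only the Python standard library (illustrative sketch, not minikube code):

```python
# Check that a multi-document kubeadm config stream contains the
# expected `kind:` in each document, using stdlib-only parsing.
import re

CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def document_kinds(stream: str) -> list[str]:
    """Return the `kind:` of each YAML document in the stream."""
    kinds = []
    for doc in re.split(r"^---$", stream, flags=re.M):
        m = re.search(r"^kind:\s*(\S+)", doc, flags=re.M)
        if m:
            kinds.append(m.group(1))
    return kinds

assert document_kinds(CONFIG) == [
    "InitConfiguration", "ClusterConfiguration",
    "KubeletConfiguration", "KubeProxyConfiguration",
]
```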
	
	I0731 10:38:34.813863 3621900 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-315335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-315335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 10:38:34.813940 3621900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 10:38:34.824121 3621900 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:38:34.824188 3621900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:38:34.834234 3621900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0731 10:38:34.854562 3621900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 10:38:34.875197 3621900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0731 10:38:34.894850 3621900 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 10:38:34.899160 3621900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:38:34.912174 3621900 certs.go:56] Setting up /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335 for IP: 192.168.49.2
	I0731 10:38:34.912204 3621900 certs.go:190] acquiring lock for shared ca certs: {Name:mkeee59ed5ac829e33e53e6a4b7b185b15e70a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:34.912353 3621900 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key
	I0731 10:38:36.240432 3621900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt ...
	I0731 10:38:36.240465 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt: {Name:mkd8a58763ec1bf8d066fc431e6225be80aad6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:36.240666 3621900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key ...
	I0731 10:38:36.240679 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key: {Name:mkc31515be4a26df46dc4f5574528907740ab622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:36.240770 3621900 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key
	I0731 10:38:36.960786 3621900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt ...
	I0731 10:38:36.960819 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt: {Name:mkf301793cae5eb70a393058c35c1432aef0c390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:36.961006 3621900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key ...
	I0731 10:38:36.961019 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key: {Name:mk1641cfc6d90c5f9b2f5945bc579dae4422aac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:36.961160 3621900 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.key
	I0731 10:38:36.961178 3621900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt with IP's: []
	I0731 10:38:37.122750 3621900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt ...
	I0731 10:38:37.122778 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: {Name:mk74e7daa3065b3ee92581bf3311dcf898d614fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:37.122951 3621900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.key ...
	I0731 10:38:37.122962 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.key: {Name:mkb2415b555bef0fc132809d549ae0bfd8151be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:37.123041 3621900 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key.dd3b5fb2
	I0731 10:38:37.123059 3621900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 10:38:37.535823 3621900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt.dd3b5fb2 ...
	I0731 10:38:37.535854 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt.dd3b5fb2: {Name:mkc8417bb8fc838a863354fc8ae88601a47cfdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:37.536038 3621900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key.dd3b5fb2 ...
	I0731 10:38:37.536051 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key.dd3b5fb2: {Name:mkc28e08a6bcb5ef930551d0c54112c72dfaad93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:37.536130 3621900 certs.go:337] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt
	I0731 10:38:37.536200 3621900 certs.go:341] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key
	I0731 10:38:37.536247 3621900 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.key
	I0731 10:38:37.536265 3621900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.crt with IP's: []
	I0731 10:38:38.086294 3621900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.crt ...
	I0731 10:38:38.086336 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.crt: {Name:mke490ae115cdd8369d57458324662f38ac123e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:38.086535 3621900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.key ...
	I0731 10:38:38.086551 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.key: {Name:mk517d86d0c73b2c681331ad886e794c80cf4934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:38:38.086739 3621900 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 10:38:38.086780 3621900 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:38:38.086810 3621900 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:38:38.086838 3621900 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem (1679 bytes)
	I0731 10:38:38.087429 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 10:38:38.116732 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 10:38:38.143895 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:38:38.171722 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 10:38:38.200144 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:38:38.228282 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 10:38:38.255076 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:38:38.281695 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 10:38:38.308837 3621900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:38:38.336269 3621900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:38:38.357003 3621900 ssh_runner.go:195] Run: openssl version
	I0731 10:38:38.364235 3621900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:38:38.375690 3621900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:38:38.380129 3621900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:38:38.380190 3621900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:38:38.388571 3621900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 10:38:38.400108 3621900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 10:38:38.404240 3621900 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 10:38:38.404285 3621900 kubeadm.go:404] StartCluster: {Name:addons-315335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-315335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:38:38.404373 3621900 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0731 10:38:38.404432 3621900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 10:38:38.446123 3621900 cri.go:89] found id: ""
	I0731 10:38:38.446238 3621900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:38:38.456670 3621900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:38:38.467029 3621900 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 10:38:38.467095 3621900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:38:38.477966 3621900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:38:38.478007 3621900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 10:38:38.530590 3621900 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 10:38:38.530867 3621900 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 10:38:38.578589 3621900 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 10:38:38.578667 3621900 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0731 10:38:38.578712 3621900 kubeadm.go:322] OS: Linux
	I0731 10:38:38.578760 3621900 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 10:38:38.578816 3621900 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 10:38:38.578872 3621900 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 10:38:38.578922 3621900 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 10:38:38.578976 3621900 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 10:38:38.579035 3621900 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 10:38:38.579089 3621900 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0731 10:38:38.579137 3621900 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0731 10:38:38.579190 3621900 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0731 10:38:38.665987 3621900 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:38:38.666108 3621900 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:38:38.666201 3621900 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:38:38.909506 3621900 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:38:38.911601 3621900 out.go:204]   - Generating certificates and keys ...
	I0731 10:38:38.911708 3621900 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 10:38:38.911800 3621900 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 10:38:39.298718 3621900 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 10:38:39.724685 3621900 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 10:38:40.080622 3621900 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 10:38:40.367843 3621900 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 10:38:41.104001 3621900 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 10:38:41.104431 3621900 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-315335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:38:41.295447 3621900 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 10:38:41.295843 3621900 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-315335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:38:41.823765 3621900 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 10:38:42.317785 3621900 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 10:38:42.671118 3621900 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 10:38:42.671458 3621900 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:38:43.566237 3621900 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:38:44.471728 3621900 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:38:44.681789 3621900 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:38:45.709165 3621900 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:38:45.723315 3621900 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:38:45.724145 3621900 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:38:45.724407 3621900 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 10:38:45.834258 3621900 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:38:45.836319 3621900 out.go:204]   - Booting up control plane ...
	I0731 10:38:45.836410 3621900 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:38:45.836516 3621900 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:38:45.836581 3621900 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:38:45.837680 3621900 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:38:45.840496 3621900 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:38:52.843046 3621900 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002032 seconds
	I0731 10:38:52.843163 3621900 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:38:52.859962 3621900 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:38:53.391604 3621900 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:38:53.391809 3621900 kubeadm.go:322] [mark-control-plane] Marking the node addons-315335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:38:53.903111 3621900 kubeadm.go:322] [bootstrap-token] Using token: 22u2s8.8jgclnjdrq0agwex
	I0731 10:38:53.904936 3621900 out.go:204]   - Configuring RBAC rules ...
	I0731 10:38:53.905054 3621900 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:38:53.910482 3621900 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:38:53.917779 3621900 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:38:53.922533 3621900 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:38:53.928971 3621900 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:38:53.932813 3621900 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:38:53.946575 3621900 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:38:54.187091 3621900 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 10:38:54.315823 3621900 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 10:38:54.317478 3621900 kubeadm.go:322] 
	I0731 10:38:54.317547 3621900 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 10:38:54.317559 3621900 kubeadm.go:322] 
	I0731 10:38:54.317631 3621900 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 10:38:54.317640 3621900 kubeadm.go:322] 
	I0731 10:38:54.317665 3621900 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 10:38:54.317724 3621900 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:38:54.317782 3621900 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:38:54.317791 3621900 kubeadm.go:322] 
	I0731 10:38:54.317841 3621900 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 10:38:54.317850 3621900 kubeadm.go:322] 
	I0731 10:38:54.317894 3621900 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:38:54.317902 3621900 kubeadm.go:322] 
	I0731 10:38:54.317951 3621900 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 10:38:54.318024 3621900 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:38:54.318091 3621900 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:38:54.318099 3621900 kubeadm.go:322] 
	I0731 10:38:54.318178 3621900 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:38:54.318253 3621900 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 10:38:54.318262 3621900 kubeadm.go:322] 
	I0731 10:38:54.318563 3621900 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 22u2s8.8jgclnjdrq0agwex \
	I0731 10:38:54.318687 3621900 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:86a59b46a66ac234bd53b6c72750e3c62130510b828ccfbf571d11f4fbb3f8f1 \
	I0731 10:38:54.318712 3621900 kubeadm.go:322] 	--control-plane 
	I0731 10:38:54.318724 3621900 kubeadm.go:322] 
	I0731 10:38:54.318819 3621900 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:38:54.318829 3621900 kubeadm.go:322] 
	I0731 10:38:54.318919 3621900 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 22u2s8.8jgclnjdrq0agwex \
	I0731 10:38:54.319023 3621900 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:86a59b46a66ac234bd53b6c72750e3c62130510b828ccfbf571d11f4fbb3f8f1 
	I0731 10:38:54.324380 3621900 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 10:38:54.324492 3621900 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 10:38:54.324506 3621900 cni.go:84] Creating CNI manager for ""
	I0731 10:38:54.324513 3621900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:38:54.328016 3621900 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 10:38:54.329661 3621900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 10:38:54.335144 3621900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 10:38:54.335158 3621900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 10:38:54.366474 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 10:38:55.259402 3621900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:38:55.259490 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:55.259534 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=a7848ba25aaaad8ebb50e721c0d343e471188fc7 minikube.k8s.io/name=addons-315335 minikube.k8s.io/updated_at=2023_07_31T10_38_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:55.412481 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:55.412545 3621900 ops.go:34] apiserver oom_adj: -16
	I0731 10:38:55.549488 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:56.141166 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:56.640618 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:57.141413 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:57.640666 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:58.141234 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:58.641627 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:59.141515 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:38:59.641193 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:00.141205 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:00.640607 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:01.140642 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:01.641485 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:02.140697 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:02.640700 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:03.140632 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:03.641413 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:04.140633 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:04.641215 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:05.140796 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:05.640745 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:06.140912 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:06.641049 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:07.141089 3621900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:39:07.317774 3621900 kubeadm.go:1081] duration metric: took 12.058348716s to wait for elevateKubeSystemPrivileges.
	I0731 10:39:07.317801 3621900 kubeadm.go:406] StartCluster complete in 28.913519097s
	I0731 10:39:07.317817 3621900 settings.go:142] acquiring lock: {Name:mk7385413106a9bc6c5ba9de86edde2c8dc9b1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:39:07.317920 3621900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:39:07.318300 3621900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/kubeconfig: {Name:mkbf88964f408983a815b4e4688fb8f882a1e0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:39:07.320630 3621900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 10:39:07.321122 3621900 config.go:182] Loaded profile config "addons-315335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:39:07.321162 3621900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0731 10:39:07.321232 3621900 addons.go:69] Setting volumesnapshots=true in profile "addons-315335"
	I0731 10:39:07.321245 3621900 addons.go:231] Setting addon volumesnapshots=true in "addons-315335"
	I0731 10:39:07.321302 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.321743 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.322110 3621900 addons.go:69] Setting ingress=true in profile "addons-315335"
	I0731 10:39:07.322130 3621900 addons.go:231] Setting addon ingress=true in "addons-315335"
	I0731 10:39:07.322166 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.322579 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.322690 3621900 addons.go:69] Setting ingress-dns=true in profile "addons-315335"
	I0731 10:39:07.322702 3621900 addons.go:231] Setting addon ingress-dns=true in "addons-315335"
	I0731 10:39:07.322731 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.323089 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.323152 3621900 addons.go:69] Setting inspektor-gadget=true in profile "addons-315335"
	I0731 10:39:07.323163 3621900 addons.go:231] Setting addon inspektor-gadget=true in "addons-315335"
	I0731 10:39:07.323186 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.323534 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.323586 3621900 addons.go:69] Setting metrics-server=true in profile "addons-315335"
	I0731 10:39:07.323595 3621900 addons.go:231] Setting addon metrics-server=true in "addons-315335"
	I0731 10:39:07.323616 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.323962 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.324021 3621900 addons.go:69] Setting registry=true in profile "addons-315335"
	I0731 10:39:07.324029 3621900 addons.go:231] Setting addon registry=true in "addons-315335"
	I0731 10:39:07.324051 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.324392 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.324456 3621900 addons.go:69] Setting storage-provisioner=true in profile "addons-315335"
	I0731 10:39:07.324465 3621900 addons.go:231] Setting addon storage-provisioner=true in "addons-315335"
	I0731 10:39:07.324486 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.324843 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.324927 3621900 addons.go:69] Setting cloud-spanner=true in profile "addons-315335"
	I0731 10:39:07.324943 3621900 addons.go:231] Setting addon cloud-spanner=true in "addons-315335"
	I0731 10:39:07.324985 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.325452 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.325606 3621900 addons.go:69] Setting default-storageclass=true in profile "addons-315335"
	I0731 10:39:07.325642 3621900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-315335"
	I0731 10:39:07.326836 3621900 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-315335"
	I0731 10:39:07.326889 3621900 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-315335"
	I0731 10:39:07.326920 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.327323 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.331385 3621900 addons.go:69] Setting gcp-auth=true in profile "addons-315335"
	I0731 10:39:07.331409 3621900 mustload.go:65] Loading cluster: addons-315335
	I0731 10:39:07.331628 3621900 config.go:182] Loaded profile config "addons-315335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:39:07.331931 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.338766 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.427833 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 10:39:07.484773 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 10:39:07.484803 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 10:39:07.497935 3621900 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0731 10:39:07.500890 3621900 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:39:07.498992 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.497922 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.524217 3621900 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0731 10:39:07.533385 3621900 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 10:39:07.533453 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 10:39:07.533555 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.538052 3621900 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:39:07.522791 3621900 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0731 10:39:07.542965 3621900 out.go:177]   - Using image docker.io/registry:2.8.1
	I0731 10:39:07.541611 3621900 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 10:39:07.552009 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 10:39:07.552023 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 10:39:07.552090 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.553889 3621900 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0731 10:39:07.549057 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0731 10:39:07.551372 3621900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-315335" context rescaled to 1 replicas
	I0731 10:39:07.556782 3621900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0731 10:39:07.558769 3621900 out.go:177] * Verifying Kubernetes components...
	I0731 10:39:07.557218 3621900 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 10:39:07.557299 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.562198 3621900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:39:07.562436 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0731 10:39:07.562587 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.577176 3621900 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0731 10:39:07.579796 3621900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 10:39:07.579815 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 10:39:07.579877 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.582081 3621900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:39:07.584335 3621900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:39:07.584358 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:39:07.584424 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.624375 3621900 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0731 10:39:07.626708 3621900 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0731 10:39:07.626729 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0731 10:39:07.626793 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.662868 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 10:39:07.666973 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 10:39:07.694752 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 10:39:07.697036 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 10:39:07.699030 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 10:39:07.700875 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 10:39:07.706493 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 10:39:07.708548 3621900 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 10:39:07.712475 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 10:39:07.712497 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 10:39:07.712559 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.714883 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.733391 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.745578 3621900 addons.go:231] Setting addon default-storageclass=true in "addons-315335"
	I0731 10:39:07.745620 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:07.746068 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:07.809532 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.811511 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.823553 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.831784 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.858517 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.860441 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.861702 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.885603 3621900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:39:07.885621 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:39:07.885678 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:07.917476 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:07.945679 3621900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 10:39:07.946831 3621900 node_ready.go:35] waiting up to 6m0s for node "addons-315335" to be "Ready" ...
	I0731 10:39:07.950005 3621900 node_ready.go:49] node "addons-315335" has status "Ready":"True"
	I0731 10:39:07.950023 3621900 node_ready.go:38] duration metric: took 3.142684ms waiting for node "addons-315335" to be "Ready" ...
	I0731 10:39:07.950031 3621900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:39:07.959157 3621900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ggjwv" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:08.262617 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 10:39:08.274519 3621900 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 10:39:08.274588 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 10:39:08.452502 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:39:08.456474 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 10:39:08.456538 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 10:39:08.494023 3621900 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 10:39:08.494091 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 10:39:08.496900 3621900 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 10:39:08.496950 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 10:39:08.506533 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 10:39:08.547018 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 10:39:08.554660 3621900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 10:39:08.554722 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 10:39:08.565822 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 10:39:08.565891 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 10:39:08.583941 3621900 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 10:39:08.584008 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 10:39:08.619173 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:39:08.645327 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 10:39:08.645391 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 10:39:08.683958 3621900 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 10:39:08.684020 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 10:39:08.742930 3621900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 10:39:08.742995 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 10:39:08.792677 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 10:39:08.792734 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 10:39:08.804253 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 10:39:08.804328 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 10:39:08.830855 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 10:39:08.830923 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 10:39:08.915845 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 10:39:09.042774 3621900 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:39:09.042800 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 10:39:09.075186 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 10:39:09.075211 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 10:39:09.081605 3621900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 10:39:09.081639 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 10:39:09.123853 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 10:39:09.123875 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 10:39:09.239131 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:39:09.308353 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 10:39:09.308378 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 10:39:09.344961 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 10:39:09.348277 3621900 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 10:39:09.348338 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 10:39:09.476079 3621900 pod_ready.go:97] error getting pod "coredns-5d78c9869d-ggjwv" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-ggjwv" not found
	I0731 10:39:09.476149 3621900 pod_ready.go:81] duration metric: took 1.516921871s waiting for pod "coredns-5d78c9869d-ggjwv" in "kube-system" namespace to be "Ready" ...
	E0731 10:39:09.476174 3621900 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-ggjwv" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-ggjwv" not found
	I0731 10:39:09.476197 3621900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:09.597666 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 10:39:09.597735 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 10:39:09.635660 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 10:39:09.635731 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 10:39:09.731850 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 10:39:09.731920 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 10:39:09.826614 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 10:39:09.826684 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 10:39:09.867337 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 10:39:09.867405 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 10:39:10.020812 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 10:39:10.020885 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 10:39:10.038621 3621900 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 10:39:10.038697 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0731 10:39:10.159736 3621900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.214011577s)
	I0731 10:39:10.159811 3621900 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 10:39:10.172046 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.909358521s)
	I0731 10:39:10.268812 3621900 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 10:39:10.268837 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 10:39:10.298767 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 10:39:10.422468 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 10:39:11.413362 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.960790002s)
	I0731 10:39:11.413436 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.90684957s)
	I0731 10:39:11.541845 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:13.913811 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.366723334s)
	I0731 10:39:13.913847 3621900 addons.go:467] Verifying addon ingress=true in "addons-315335"
	I0731 10:39:13.916010 3621900 out.go:177] * Verifying ingress addon...
	I0731 10:39:13.913978 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.294748491s)
	I0731 10:39:13.914008 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.998103558s)
	I0731 10:39:13.914104 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.674939511s)
	I0731 10:39:13.914174 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.569143382s)
	I0731 10:39:13.914230 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.615435447s)
	I0731 10:39:13.918638 3621900 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 10:39:13.918833 3621900 addons.go:467] Verifying addon registry=true in "addons-315335"
	I0731 10:39:13.922028 3621900 out.go:177] * Verifying registry addon...
	W0731 10:39:13.918984 3621900 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 10:39:13.918995 3621900 addons.go:467] Verifying addon metrics-server=true in "addons-315335"
	I0731 10:39:13.925577 3621900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 10:39:13.922174 3621900 retry.go:31] will retry after 296.452817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 10:39:13.923405 3621900 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 10:39:13.925675 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:13.931212 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:13.932979 3621900 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 10:39:13.933019 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:13.936569 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:14.001441 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:14.222930 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:39:14.331757 3621900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 10:39:14.331832 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:14.364318 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:14.441969 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:14.443140 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:14.781390 3621900 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 10:39:14.906667 3621900 addons.go:231] Setting addon gcp-auth=true in "addons-315335"
	I0731 10:39:14.906754 3621900 host.go:66] Checking if "addons-315335" exists ...
	I0731 10:39:14.907227 3621900 cli_runner.go:164] Run: docker container inspect addons-315335 --format={{.State.Status}}
	I0731 10:39:14.938224 3621900 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 10:39:14.938271 3621900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-315335
	I0731 10:39:14.987123 3621900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35338 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/addons-315335/id_rsa Username:docker}
	I0731 10:39:15.005625 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:15.007402 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:15.085593 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.663055494s)
	I0731 10:39:15.085652 3621900 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-315335"
	I0731 10:39:15.088521 3621900 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 10:39:15.091869 3621900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 10:39:15.148300 3621900 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 10:39:15.148320 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:15.168246 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:15.435993 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:15.446044 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:15.675339 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:15.949286 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:15.950069 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:16.010789 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:16.081285 3621900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.858259993s)
	I0731 10:39:16.081389 3621900 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.14314682s)
	I0731 10:39:16.089433 3621900 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:39:16.091972 3621900 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0731 10:39:16.094016 3621900 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 10:39:16.094072 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 10:39:16.122082 3621900 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 10:39:16.122146 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 10:39:16.143897 3621900 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 10:39:16.143975 3621900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0731 10:39:16.167533 3621900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 10:39:16.174656 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:16.437002 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:16.441683 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:16.674807 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:16.964293 3621900 addons.go:467] Verifying addon gcp-auth=true in "addons-315335"
	I0731 10:39:16.967998 3621900 out.go:177] * Verifying gcp-auth addon...
	I0731 10:39:16.970778 3621900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 10:39:16.998702 3621900 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 10:39:16.998763 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:17.000825 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:17.001856 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:17.007892 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:17.173840 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:17.435752 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:17.441902 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:17.511892 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:17.675255 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:17.935990 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:17.941838 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:18.012873 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:18.175068 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:18.436475 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:18.441247 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:18.501636 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:18.512484 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:18.675207 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:18.936135 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:18.942164 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:19.014115 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:19.174595 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:19.436437 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:19.442042 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:19.511781 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:19.675681 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:19.936991 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:19.941591 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:20.013852 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:20.175008 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:20.437172 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:20.441919 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:20.512533 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:20.674942 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:20.937312 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:20.942070 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:21.000875 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:21.012179 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:21.182120 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:21.435707 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:21.441225 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:21.511874 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:21.674332 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:21.936161 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:21.941771 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:22.012419 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:22.185165 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:22.436360 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:22.442653 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:22.512471 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:22.675072 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:22.936373 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:22.942265 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:23.001726 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:23.012655 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:23.174759 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:23.437046 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:23.441693 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:23.512768 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:23.673969 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:23.936446 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:23.941818 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:24.029791 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:24.174500 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:24.436188 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:24.441613 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:24.512454 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:24.675687 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:24.936678 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:24.941869 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:25.012668 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:25.173753 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:25.436269 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:25.441573 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:25.500801 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:25.512366 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:25.675651 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:25.936179 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:25.941719 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:26.011853 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:26.182631 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:26.437558 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:26.441170 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:26.511421 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:26.674583 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:26.936168 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:26.941264 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:27.011811 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:27.173978 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:27.437267 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:27.441575 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:27.501280 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:27.512039 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:27.674134 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:27.944465 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:27.945234 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:28.012512 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:28.175086 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:28.435775 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:28.441300 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:28.512290 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:28.673995 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:28.936290 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:28.941779 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:29.012141 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:29.174086 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:29.436379 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:29.441926 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:29.511628 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:29.673857 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:29.935454 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:29.941769 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:30.007874 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:30.013836 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:30.174606 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:30.435645 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:30.440888 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:30.511704 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:30.673568 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:30.936017 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:30.941864 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:31.012162 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:31.174605 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:31.435997 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:31.441358 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:31.512169 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:31.674031 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:31.936199 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:31.941224 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:32.012030 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:32.174002 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:32.436831 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:32.440663 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:32.502352 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:32.511347 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:32.674312 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:32.936583 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:32.940783 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:33.011958 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:33.174147 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:33.436295 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:33.441536 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:33.512580 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:33.673806 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:33.936356 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:33.941337 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:34.012277 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:34.174569 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:34.436022 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:34.441426 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:34.511678 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:34.674678 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:34.936595 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:34.944939 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:35.004262 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:35.012356 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:35.174307 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:35.435826 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:35.440999 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:35.511598 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:35.674185 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:35.936533 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:35.942036 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:36.012228 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:36.174054 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:36.437370 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:36.441076 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:36.511735 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:36.674034 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:36.936185 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:36.941028 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:37.012498 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:37.174409 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:37.435774 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:37.441031 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:37.500257 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:37.511271 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:37.673864 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:37.936063 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:37.941426 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:38.012717 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:38.176479 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:38.435677 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:38.441647 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:38.511101 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:38.673319 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:38.936520 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:38.944095 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:39.012216 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:39.174242 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:39.436272 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:39.441732 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:39.500902 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:39.511260 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:39.673452 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:39.936036 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:39.941464 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:40.012125 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:40.173621 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:40.435949 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:40.441468 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:40.512640 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:40.676503 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:40.936604 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:40.941542 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:41.012695 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:41.179845 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:41.435981 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:41.442221 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:41.504748 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:41.513723 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:41.674812 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:41.936083 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:41.941618 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:42.011757 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:42.174460 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:42.435978 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:42.441712 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:42.511840 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:42.676522 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:42.941270 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:42.945617 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:43.011478 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:43.174890 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:43.436723 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:43.441160 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:43.511960 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:43.676769 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:43.939499 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:43.942269 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:44.001653 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:44.013055 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:44.174268 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:44.436150 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:44.441835 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:44.512099 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:44.674622 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:44.937271 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:44.945132 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:45.016741 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:45.179212 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:45.437732 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:45.442388 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:45.515371 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:45.675125 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:45.941661 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:45.945912 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:46.007409 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:46.015300 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:46.175202 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:46.435879 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:46.442164 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:46.512602 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:46.674906 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:46.936151 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:46.941382 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:47.012106 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:47.173497 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:47.435960 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:47.446714 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:47.512030 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:47.674114 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:47.940571 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:47.943279 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:48.012504 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:48.175060 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:48.435602 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:48.441394 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:48.500380 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:48.511176 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:48.674026 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:48.938932 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:48.943360 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:49.011751 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:49.175085 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:49.437667 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:49.442351 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:49.512946 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:49.675224 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:49.936696 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:49.940855 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:50.012346 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:50.174475 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:50.435930 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:50.441159 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:50.500539 3621900 pod_ready.go:102] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"False"
	I0731 10:39:50.512277 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:50.675338 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:50.937914 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:50.944251 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:51.016758 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:51.174969 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:51.436823 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:51.441753 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:51.511856 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:51.675437 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:51.954884 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:51.955679 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:52.020012 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:52.031916 3621900 pod_ready.go:92] pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.031942 3621900 pod_ready.go:81] duration metric: took 42.555706132s waiting for pod "coredns-5d78c9869d-n7rzq" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.031955 3621900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.049955 3621900 pod_ready.go:92] pod "etcd-addons-315335" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.049981 3621900 pod_ready.go:81] duration metric: took 18.018847ms waiting for pod "etcd-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.049998 3621900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.060219 3621900 pod_ready.go:92] pod "kube-apiserver-addons-315335" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.060244 3621900 pod_ready.go:81] duration metric: took 10.23899ms waiting for pod "kube-apiserver-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.060257 3621900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.071911 3621900 pod_ready.go:92] pod "kube-controller-manager-addons-315335" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.071940 3621900 pod_ready.go:81] duration metric: took 11.675558ms waiting for pod "kube-controller-manager-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.071952 3621900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgbb9" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.077614 3621900 pod_ready.go:92] pod "kube-proxy-bgbb9" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.077636 3621900 pod_ready.go:81] duration metric: took 5.675892ms waiting for pod "kube-proxy-bgbb9" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.077647 3621900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.174179 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:52.398903 3621900 pod_ready.go:92] pod "kube-scheduler-addons-315335" in "kube-system" namespace has status "Ready":"True"
	I0731 10:39:52.398928 3621900 pod_ready.go:81] duration metric: took 321.271883ms waiting for pod "kube-scheduler-addons-315335" in "kube-system" namespace to be "Ready" ...
	I0731 10:39:52.398937 3621900 pod_ready.go:38] duration metric: took 44.448896686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:39:52.398953 3621900 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:39:52.399006 3621900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:39:52.421828 3621900 api_server.go:72] duration metric: took 44.864991939s to wait for apiserver process to appear ...
	I0731 10:39:52.421899 3621900 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:39:52.421930 3621900 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 10:39:52.431523 3621900 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 10:39:52.433469 3621900 api_server.go:141] control plane version: v1.27.3
	I0731 10:39:52.433527 3621900 api_server.go:131] duration metric: took 11.608145ms to wait for apiserver health ...
	I0731 10:39:52.433549 3621900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 10:39:52.437775 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:52.441363 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:52.512652 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:52.607340 3621900 system_pods.go:59] 17 kube-system pods found
	I0731 10:39:52.607409 3621900 system_pods.go:61] "coredns-5d78c9869d-n7rzq" [d5cacbaa-7aae-4286-8f7e-af14d7719a8f] Running
	I0731 10:39:52.607434 3621900 system_pods.go:61] "csi-hostpath-attacher-0" [649fdc39-7bc5-42c5-9fd4-bf5639a06b7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 10:39:52.607459 3621900 system_pods.go:61] "csi-hostpath-resizer-0" [04b1ca48-df5b-4163-a07d-a29d57c08122] Running
	I0731 10:39:52.607493 3621900 system_pods.go:61] "csi-hostpathplugin-jbdxd" [aab5b10a-cbd8-4d5f-9a63-0554e2ee9648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 10:39:52.607524 3621900 system_pods.go:61] "etcd-addons-315335" [ad11d9ec-05d7-4851-bc81-1fe81d51f811] Running
	I0731 10:39:52.607545 3621900 system_pods.go:61] "kindnet-wmmnw" [faed146c-8218-403e-91a7-94ef28d3e2dc] Running
	I0731 10:39:52.607571 3621900 system_pods.go:61] "kube-apiserver-addons-315335" [e3eeedbf-f5de-46d9-bb4b-a5456181cec0] Running
	I0731 10:39:52.607604 3621900 system_pods.go:61] "kube-controller-manager-addons-315335" [cd976ab3-38b0-44f4-8977-42b3cb5ba3b1] Running
	I0731 10:39:52.607636 3621900 system_pods.go:61] "kube-ingress-dns-minikube" [84076644-76d1-498a-be93-d967c34530cd] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 10:39:52.607661 3621900 system_pods.go:61] "kube-proxy-bgbb9" [42b37ce1-1641-4497-a4db-f68efb09d84d] Running
	I0731 10:39:52.607699 3621900 system_pods.go:61] "kube-scheduler-addons-315335" [11084dc8-91e4-43a9-8c93-a39062e9bc85] Running
	I0731 10:39:52.607724 3621900 system_pods.go:61] "metrics-server-7746886d4f-zmc78" [b317a770-8561-4b60-aded-a636d40c178a] Running
	I0731 10:39:52.607754 3621900 system_pods.go:61] "registry-proxy-pww6s" [f217c09b-8dba-4647-baa8-07ab108407df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 10:39:52.607776 3621900 system_pods.go:61] "registry-v9cxf" [03de58e9-3f67-44c3-8965-868a902feada] Running
	I0731 10:39:52.607809 3621900 system_pods.go:61] "snapshot-controller-75bbb956b9-4k4x7" [56d4705d-3345-4bd1-b2ac-94807ddc01c5] Running
	I0731 10:39:52.607835 3621900 system_pods.go:61] "snapshot-controller-75bbb956b9-w2gcc" [d04bfd3f-b2f4-41d6-a27a-2abc090781c5] Running
	I0731 10:39:52.607860 3621900 system_pods.go:61] "storage-provisioner" [933cd5c9-033f-4be8-9150-57b2b4a02119] Running
	I0731 10:39:52.607899 3621900 system_pods.go:74] duration metric: took 174.328475ms to wait for pod list to return data ...
	I0731 10:39:52.607922 3621900 default_sa.go:34] waiting for default service account to be created ...
	I0731 10:39:52.674485 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:52.800047 3621900 default_sa.go:45] found service account: "default"
	I0731 10:39:52.800116 3621900 default_sa.go:55] duration metric: took 192.162026ms for default service account to be created ...
	I0731 10:39:52.800139 3621900 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 10:39:52.935846 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:52.941312 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:53.005909 3621900 system_pods.go:86] 17 kube-system pods found
	I0731 10:39:53.005981 3621900 system_pods.go:89] "coredns-5d78c9869d-n7rzq" [d5cacbaa-7aae-4286-8f7e-af14d7719a8f] Running
	I0731 10:39:53.006006 3621900 system_pods.go:89] "csi-hostpath-attacher-0" [649fdc39-7bc5-42c5-9fd4-bf5639a06b7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 10:39:53.006029 3621900 system_pods.go:89] "csi-hostpath-resizer-0" [04b1ca48-df5b-4163-a07d-a29d57c08122] Running
	I0731 10:39:53.006065 3621900 system_pods.go:89] "csi-hostpathplugin-jbdxd" [aab5b10a-cbd8-4d5f-9a63-0554e2ee9648] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 10:39:53.006092 3621900 system_pods.go:89] "etcd-addons-315335" [ad11d9ec-05d7-4851-bc81-1fe81d51f811] Running
	I0731 10:39:53.006114 3621900 system_pods.go:89] "kindnet-wmmnw" [faed146c-8218-403e-91a7-94ef28d3e2dc] Running
	I0731 10:39:53.006138 3621900 system_pods.go:89] "kube-apiserver-addons-315335" [e3eeedbf-f5de-46d9-bb4b-a5456181cec0] Running
	I0731 10:39:53.006170 3621900 system_pods.go:89] "kube-controller-manager-addons-315335" [cd976ab3-38b0-44f4-8977-42b3cb5ba3b1] Running
	I0731 10:39:53.006197 3621900 system_pods.go:89] "kube-ingress-dns-minikube" [84076644-76d1-498a-be93-d967c34530cd] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 10:39:53.006220 3621900 system_pods.go:89] "kube-proxy-bgbb9" [42b37ce1-1641-4497-a4db-f68efb09d84d] Running
	I0731 10:39:53.006245 3621900 system_pods.go:89] "kube-scheduler-addons-315335" [11084dc8-91e4-43a9-8c93-a39062e9bc85] Running
	I0731 10:39:53.006275 3621900 system_pods.go:89] "metrics-server-7746886d4f-zmc78" [b317a770-8561-4b60-aded-a636d40c178a] Running
	I0731 10:39:53.006302 3621900 system_pods.go:89] "registry-proxy-pww6s" [f217c09b-8dba-4647-baa8-07ab108407df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 10:39:53.006326 3621900 system_pods.go:89] "registry-v9cxf" [03de58e9-3f67-44c3-8965-868a902feada] Running
	I0731 10:39:53.006351 3621900 system_pods.go:89] "snapshot-controller-75bbb956b9-4k4x7" [56d4705d-3345-4bd1-b2ac-94807ddc01c5] Running
	I0731 10:39:53.006383 3621900 system_pods.go:89] "snapshot-controller-75bbb956b9-w2gcc" [d04bfd3f-b2f4-41d6-a27a-2abc090781c5] Running
	I0731 10:39:53.006409 3621900 system_pods.go:89] "storage-provisioner" [933cd5c9-033f-4be8-9150-57b2b4a02119] Running
	I0731 10:39:53.006434 3621900 system_pods.go:126] duration metric: took 206.273874ms to wait for k8s-apps to be running ...
	I0731 10:39:53.006457 3621900 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 10:39:53.006539 3621900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:39:53.012505 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:53.023380 3621900 system_svc.go:56] duration metric: took 16.914873ms WaitForService to wait for kubelet.
	I0731 10:39:53.023404 3621900 kubeadm.go:581] duration metric: took 45.466572164s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 10:39:53.023423 3621900 node_conditions.go:102] verifying NodePressure condition ...
	I0731 10:39:53.179477 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:53.198855 3621900 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 10:39:53.198887 3621900 node_conditions.go:123] node cpu capacity is 2
	I0731 10:39:53.198901 3621900 node_conditions.go:105] duration metric: took 175.456728ms to run NodePressure ...
	I0731 10:39:53.198912 3621900 start.go:228] waiting for startup goroutines ...
	I0731 10:39:53.437167 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:53.442416 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:39:53.512955 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:53.674723 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:53.936402 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:53.941596 3621900 kapi.go:107] duration metric: took 40.016012793s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 10:39:54.011947 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:54.174262 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:54.436119 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:54.512417 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:54.674266 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:54.936146 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:55.016428 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:55.178951 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:55.435950 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:55.511636 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:55.673455 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:55.936697 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:56.013011 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:56.173613 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:56.435476 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:56.512755 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:56.674011 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:56.936556 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:57.017645 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:57.182561 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:57.436807 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:57.513030 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:57.675260 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:57.937404 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:58.012444 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:58.175292 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:58.436468 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:58.512841 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:58.675188 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:58.936266 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:59.012264 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:59.175557 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:59.436576 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:39:59.512260 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:39:59.674523 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:39:59.936173 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:40:00.024571 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:00.176187 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:00.436419 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:40:00.514629 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:00.676654 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:00.936177 3621900 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:40:01.012860 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:01.178839 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:01.436477 3621900 kapi.go:107] duration metric: took 47.517834528s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 10:40:01.512354 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:01.675955 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:02.013197 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:02.174623 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:02.511352 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:40:02.674299 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:03.028444 3621900 kapi.go:107] duration metric: took 46.057659682s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 10:40:03.030558 3621900 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-315335 cluster.
	I0731 10:40:03.032774 3621900 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 10:40:03.035027 3621900 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
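	The `gcp-auth-skip-secret` opt-out described in the addon messages above is expressed as a pod label. A minimal sketch of a pod manifest carrying that label (pod name and image are placeholders; this assumes, per the message, that the addon checks the label when the pod is created):

	```yaml
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds        # example name
	  labels:
	    gcp-auth-skip-secret: "true"   # key taken from the addon message above
	spec:
	  containers:
	  - name: app
	    image: nginx            # example image
	```

	As the final message notes, pods that already existed when the addon was enabled keep their original spec; recreate them (or rerun `addons enable` with `--refresh`) for the credential mount to apply.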
	I0731 10:40:03.175004 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:03.673589 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:04.174196 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:04.673979 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:05.175864 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:05.674993 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:06.175205 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:06.674357 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:07.174335 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:07.674711 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:08.175485 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:08.680542 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:09.173587 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:09.674777 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:10.173905 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:10.673768 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:11.174584 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:11.674855 3621900 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:40:12.176055 3621900 kapi.go:107] duration metric: took 57.084185715s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 10:40:12.178467 3621900 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 10:40:12.180295 3621900 addons.go:502] enable addons completed in 1m4.859124718s: enabled=[cloud-spanner storage-provisioner ingress-dns inspektor-gadget default-storageclass metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0731 10:40:12.180347 3621900 start.go:233] waiting for cluster config update ...
	I0731 10:40:12.180365 3621900 start.go:242] writing updated cluster config ...
	I0731 10:40:12.180662 3621900 ssh_runner.go:195] Run: rm -f paused
	I0731 10:40:12.364800 3621900 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 10:40:12.367276 3621900 out.go:177] * Done! kubectl is now configured to use "addons-315335" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d295e7c45acdd       13753a81eccfd       10 seconds ago       Exited              hello-world-app           2                   86eeb99ce477e       hello-world-app-65bdb79f98-n66t7
	56c4af7398daf       66bf2c914bf4d       32 seconds ago       Running             nginx                     0                   af46817b79d38       nginx
	75e68b725f2c3       e52b21e9e4589       59 seconds ago       Running             headlamp                  0                   54d4151b73d02       headlamp-66f6498c69-9vz65
	9862baddc75ca       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   dc2457f737c6e       gcp-auth-58478865f7-qtkg6
	2c804b998202b       b26fbddce4d07       About a minute ago   Exited              controller                0                   29402da966a2b       ingress-nginx-controller-7799c6795f-ndthh
	2ac3288a525f2       97e04611ad434       About a minute ago   Running             coredns                   0                   91c36ef133d4e       coredns-5d78c9869d-n7rzq
	99fb8812b074f       8f2588812ab29       About a minute ago   Exited              patch                     0                   f0df734788556       ingress-nginx-admission-patch-jsxz2
	21951e66f6433       8f2588812ab29       About a minute ago   Exited              create                    0                   da7198d5390ae       ingress-nginx-admission-create-pz4kv
	9504d9daca906       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   3fe30bce58b80       storage-provisioner
	ae934bc3e71ab       fb73e92641fd5       2 minutes ago        Running             kube-proxy                0                   1da3adf2d47b5       kube-proxy-bgbb9
	c9b1fce13366c       b18bf71b941ba       2 minutes ago        Running             kindnet-cni               0                   fe2232c8e441a       kindnet-wmmnw
	5ca2952b07ede       ab3683b584ae5       2 minutes ago        Running             kube-controller-manager   0                   ae357d14a8425       kube-controller-manager-addons-315335
	18d974c59328a       bcb9e554eaab6       2 minutes ago        Running             kube-scheduler            0                   dfa97fdc171be       kube-scheduler-addons-315335
	566fc0b2d8ee7       39dfb036b0986       2 minutes ago        Running             kube-apiserver            0                   0fa277477dcad       kube-apiserver-addons-315335
	3581adee045a3       24bc64e911039       2 minutes ago        Running             etcd                      0                   888459f1b3204       etcd-addons-315335
	
	* 
	* ==> containerd <==
	* Jul 31 10:41:12 addons-315335 containerd[741]: time="2023-07-31T10:41:12.554235951Z" level=warning msg="cleaning up after shim disconnected" id=2811fe7aa816398e5f079b3848137ea04f129f2eb765d5270de98de34b43cc97 namespace=k8s.io
	Jul 31 10:41:12 addons-315335 containerd[741]: time="2023-07-31T10:41:12.554335192Z" level=info msg="cleaning up dead shim"
	Jul 31 10:41:12 addons-315335 containerd[741]: time="2023-07-31T10:41:12.565774189Z" level=warning msg="cleanup warnings time=\"2023-07-31T10:41:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9916 runtime=io.containerd.runc.v2\n"
	Jul 31 10:41:12 addons-315335 containerd[741]: time="2023-07-31T10:41:12.566272338Z" level=info msg="TearDown network for sandbox \"2811fe7aa816398e5f079b3848137ea04f129f2eb765d5270de98de34b43cc97\" successfully"
	Jul 31 10:41:12 addons-315335 containerd[741]: time="2023-07-31T10:41:12.566384363Z" level=info msg="StopPodSandbox for \"2811fe7aa816398e5f079b3848137ea04f129f2eb765d5270de98de34b43cc97\" returns successfully"
	Jul 31 10:41:13 addons-315335 containerd[741]: time="2023-07-31T10:41:13.381524240Z" level=info msg="RemoveContainer for \"2a16fb016951d024884f344775441937a6266a3c85ca2243e17b17ac869dc7af\""
	Jul 31 10:41:13 addons-315335 containerd[741]: time="2023-07-31T10:41:13.386906586Z" level=info msg="RemoveContainer for \"2a16fb016951d024884f344775441937a6266a3c85ca2243e17b17ac869dc7af\" returns successfully"
	Jul 31 10:41:13 addons-315335 containerd[741]: time="2023-07-31T10:41:13.390096557Z" level=info msg="RemoveContainer for \"6dbf94ffcffd487222f930bd34f67b9f671d0b962366e382a17af7d1dc27d1c6\""
	Jul 31 10:41:13 addons-315335 containerd[741]: time="2023-07-31T10:41:13.397295279Z" level=info msg="RemoveContainer for \"6dbf94ffcffd487222f930bd34f67b9f671d0b962366e382a17af7d1dc27d1c6\" returns successfully"
	Jul 31 10:41:14 addons-315335 containerd[741]: time="2023-07-31T10:41:14.112030717Z" level=info msg="StopContainer for \"2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57\" with timeout 1 (s)"
	Jul 31 10:41:14 addons-315335 containerd[741]: time="2023-07-31T10:41:14.112469083Z" level=info msg="Stop container \"2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57\" with signal terminated"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.138797647Z" level=info msg="Kill container \"2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57\""
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.256558634Z" level=info msg="shim disconnected" id=2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.256765419Z" level=warning msg="cleaning up after shim disconnected" id=2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57 namespace=k8s.io
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.256792562Z" level=info msg="cleaning up dead shim"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.267636023Z" level=warning msg="cleanup warnings time=\"2023-07-31T10:41:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9993 runtime=io.containerd.runc.v2\n"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.270315930Z" level=info msg="StopContainer for \"2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57\" returns successfully"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.271025854Z" level=info msg="StopPodSandbox for \"29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6\""
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.271096582Z" level=info msg="Container to stop \"2c804b998202b4f74c785f0501a477a181c3f9b28c13996b61017310696c3e57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.307558006Z" level=info msg="shim disconnected" id=29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.307624090Z" level=warning msg="cleaning up after shim disconnected" id=29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6 namespace=k8s.io
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.307638793Z" level=info msg="cleaning up dead shim"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.320874564Z" level=warning msg="cleanup warnings time=\"2023-07-31T10:41:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10024 runtime=io.containerd.runc.v2\n"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.404068082Z" level=info msg="TearDown network for sandbox \"29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6\" successfully"
	Jul 31 10:41:15 addons-315335 containerd[741]: time="2023-07-31T10:41:15.404119979Z" level=info msg="StopPodSandbox for \"29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6\" returns successfully"
	
	* 
	* ==> coredns [2ac3288a525f2fe9c737ebb55eefd5bdc13464706991c6d12027a0fe29365699] <==
	* [INFO] 10.244.0.12:44251 - 3912 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004462s
	[INFO] 10.244.0.12:44251 - 29977 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001690771s
	[INFO] 10.244.0.12:50801 - 24875 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002234549s
	[INFO] 10.244.0.12:44251 - 475 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002813331s
	[INFO] 10.244.0.12:50801 - 24898 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002882516s
	[INFO] 10.244.0.12:50801 - 26364 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000153066s
	[INFO] 10.244.0.12:44251 - 6351 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038605s
	[INFO] 10.244.0.12:57989 - 43956 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000107274s
	[INFO] 10.244.0.12:48424 - 33629 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067241s
	[INFO] 10.244.0.12:57989 - 57607 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067512s
	[INFO] 10.244.0.12:48424 - 4020 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050273s
	[INFO] 10.244.0.12:48424 - 12075 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078187s
	[INFO] 10.244.0.12:57989 - 40039 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056755s
	[INFO] 10.244.0.12:48424 - 38339 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075437s
	[INFO] 10.244.0.12:57989 - 36102 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072681s
	[INFO] 10.244.0.12:48424 - 36801 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073682s
	[INFO] 10.244.0.12:57989 - 63376 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042191s
	[INFO] 10.244.0.12:48424 - 22232 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058224s
	[INFO] 10.244.0.12:57989 - 58059 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040739s
	[INFO] 10.244.0.12:48424 - 27148 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010658526s
	[INFO] 10.244.0.12:57989 - 29406 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010070842s
	[INFO] 10.244.0.12:48424 - 32487 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004503947s
	[INFO] 10.244.0.12:48424 - 34226 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000123422s
	[INFO] 10.244.0.12:57989 - 2948 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005822247s
	[INFO] 10.244.0.12:57989 - 8011 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007072s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-315335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-315335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a7848ba25aaaad8ebb50e721c0d343e471188fc7
	                    minikube.k8s.io/name=addons-315335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T10_38_55_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-315335
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 10:38:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-315335
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 10:41:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 10:40:57 +0000   Mon, 31 Jul 2023 10:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 10:40:57 +0000   Mon, 31 Jul 2023 10:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 10:40:57 +0000   Mon, 31 Jul 2023 10:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 10:40:57 +0000   Mon, 31 Jul 2023 10:38:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-315335
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 edc62dbe5319401c9ff81d35f12d3c2e
	  System UUID:                010d5c9d-22e3-4ec6-a225-133a9a3a3baa
	  Boot ID:                    db857c45-c57f-400d-ae31-7370edb43af7
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-n66t7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-58478865f7-qtkg6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  headlamp                    headlamp-66f6498c69-9vz65                0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-5d78c9869d-n7rzq                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-addons-315335                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-wmmnw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-addons-315335             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-addons-315335    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-bgbb9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-315335             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node addons-315335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node addons-315335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node addons-315335 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m28s                  kubelet          Node addons-315335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s                  kubelet          Node addons-315335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s                  kubelet          Node addons-315335 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m28s                  kubelet          Node addons-315335 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m28s                  kubelet          Node addons-315335 status is now: NodeReady
	  Normal  RegisteredNode           2m16s                  node-controller  Node addons-315335 event: Registered Node addons-315335 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001169] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=00000000b1411e31
	[  +0.001151] FS-Cache: N-key=[8] 'c9445c0100000000'
	[  +0.002978] FS-Cache: Duplicate cookie detected
	[  +0.000809] FS-Cache: O-cookie c=00000084 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=00000000070210a5
	[  +0.001113] FS-Cache: O-key=[8] 'c9445c0100000000'
	[  +0.000740] FS-Cache: N-cookie c=0000008b [p=00000081 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=00000000be57c668
	[  +0.001039] FS-Cache: N-key=[8] 'c9445c0100000000'
	[  +2.681322] FS-Cache: Duplicate cookie detected
	[  +0.000788] FS-Cache: O-cookie c=00000082 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001016] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=000000000eae1b59
	[  +0.001039] FS-Cache: O-key=[8] 'c8445c0100000000'
	[  +0.000813] FS-Cache: N-cookie c=0000008d [p=00000081 fl=2 nc=0 na=1]
	[  +0.001035] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=00000000b1411e31
	[  +0.001109] FS-Cache: N-key=[8] 'c8445c0100000000'
	[  +0.291238] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000087 [p=00000081 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=0000000058adea3e
	[  +0.001105] FS-Cache: O-key=[8] 'ce445c0100000000'
	[  +0.000750] FS-Cache: N-cookie c=0000008e [p=00000081 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=0000000065a709cb
	[  +0.001063] FS-Cache: N-key=[8] 'ce445c0100000000'
	[Jul31 10:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jul31 10:15] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [3581adee045a3eaff97c20af0dbaa9d9975910514b72c6b9618a6d217d18244c] <==
	* {"level":"info","ts":"2023-07-31T10:38:47.692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-31T10:38:47.692Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-31T10:38:47.712Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-31T10:38:47.712Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T10:38:47.718Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-31T10:38:47.718Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-31T10:38:47.718Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-31T10:38:48.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-31T10:38:48.678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-315335 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T10:38:48.678Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T10:38:48.680Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-31T10:38:48.680Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T10:38:48.682Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T10:38:48.683Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-31T10:38:48.693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T10:38:48.693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T10:38:48.699Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T10:38:48.708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T10:38:48.708Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [9862baddc75ca4cf3a398398a0a3b64a66c7b032947fc227366eb1cf4921fff2] <==
	* 2023/07/31 10:40:02 GCP Auth Webhook started!
	2023/07/31 10:40:19 Ready to marshal response ...
	2023/07/31 10:40:19 Ready to write response ...
	2023/07/31 10:40:19 Ready to marshal response ...
	2023/07/31 10:40:19 Ready to write response ...
	2023/07/31 10:40:19 Ready to marshal response ...
	2023/07/31 10:40:19 Ready to write response ...
	2023/07/31 10:40:22 Ready to marshal response ...
	2023/07/31 10:40:22 Ready to write response ...
	2023/07/31 10:40:34 Ready to marshal response ...
	2023/07/31 10:40:34 Ready to write response ...
	2023/07/31 10:40:47 Ready to marshal response ...
	2023/07/31 10:40:47 Ready to write response ...
	2023/07/31 10:40:51 Ready to marshal response ...
	2023/07/31 10:40:51 Ready to write response ...
	2023/07/31 10:40:56 Ready to marshal response ...
	2023/07/31 10:40:56 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:41:22 up 18:23,  0 users,  load average: 1.39, 2.06, 2.73
	Linux addons-315335 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [c9b1fce13366c6a81867e8fec3aba376cc37b42e870e5e251be875ca0857f897] <==
	* I0731 10:39:08.140471       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 10:39:08.140484       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 10:39:38.460179       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0731 10:39:38.473831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:39:38.473912       1 main.go:227] handling current node
	I0731 10:39:48.486372       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:39:48.486400       1 main.go:227] handling current node
	I0731 10:39:58.497649       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:39:58.497674       1 main.go:227] handling current node
	I0731 10:40:08.509843       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:08.509869       1 main.go:227] handling current node
	I0731 10:40:18.517666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:18.517690       1 main.go:227] handling current node
	I0731 10:40:28.523518       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:28.523549       1 main.go:227] handling current node
	I0731 10:40:38.527668       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:38.527695       1 main.go:227] handling current node
	I0731 10:40:48.538204       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:48.538230       1 main.go:227] handling current node
	I0731 10:40:58.542585       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:40:58.542609       1 main.go:227] handling current node
	I0731 10:41:08.552679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:41:08.552710       1 main.go:227] handling current node
	I0731 10:41:18.564994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:41:18.565021       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [566fc0b2d8ee7e41afefe384d1e794ea9b567cf549c7a1c2f50038260aab17cb] <==
	* W0731 10:40:52.967675       1 handler_proxy.go:100] no RequestInfo found in the context
	E0731 10:40:52.967829       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 10:40:52.967844       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 10:40:52.980098       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0731 10:40:56.713874       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.109.44.132]
	I0731 10:41:06.088819       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.096691       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.113269       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.113535       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.129839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.129895       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.142410       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.142452       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.154542       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.154620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.164082       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.164342       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.182729       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.183051       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:41:06.196596       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:41:06.197541       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 10:41:07.142615       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 10:41:07.197608       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 10:41:07.203979       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [5ca2952b07edeb372104468e3756a86938f72df08172391c1366384fb99e7a35] <==
	* E0731 10:41:07.144688       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:07.199979       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:07.205883       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:08.166287       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:08.166320       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:08.427799       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:08.427834       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:08.775672       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:08.775706       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:10.599709       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:10.599742       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:10.765553       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:10.765586       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:11.643423       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:11.643456       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 10:41:14.084395       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0731 10:41:14.099767       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0731 10:41:16.073701       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:16.073749       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:16.499512       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:16.499544       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:16.698097       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:16.698130       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:41:17.167851       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:41:17.167882       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [ae934bc3e71abc123943a152182777f0ead9a7b2f47cc7510d023f0ac04cc5d5] <==
	* I0731 10:39:08.328487       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0731 10:39:08.328564       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0731 10:39:08.328583       1 server_others.go:554] "Using iptables proxy"
	I0731 10:39:08.405561       1 server_others.go:192] "Using iptables Proxier"
	I0731 10:39:08.405602       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 10:39:08.405611       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 10:39:08.405627       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 10:39:08.405689       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 10:39:08.406252       1 server.go:658] "Version info" version="v1.27.3"
	I0731 10:39:08.406264       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 10:39:08.407208       1 config.go:188] "Starting service config controller"
	I0731 10:39:08.407276       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 10:39:08.407310       1 config.go:97] "Starting endpoint slice config controller"
	I0731 10:39:08.407315       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 10:39:08.407839       1 config.go:315] "Starting node config controller"
	I0731 10:39:08.407846       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 10:39:08.507536       1 shared_informer.go:318] Caches are synced for service config
	I0731 10:39:08.507610       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 10:39:08.514580       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [18d974c59328a66e0e3870e00de783c74ac7151cbae77564fefac1dfd9a69600] <==
	* W0731 10:38:51.436907       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 10:38:51.437127       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 10:38:51.437227       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 10:38:51.437245       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 10:38:51.437327       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 10:38:51.437366       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 10:38:51.437459       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 10:38:51.437477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 10:38:51.437582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 10:38:51.437620       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 10:38:51.437693       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 10:38:51.437709       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 10:38:51.437865       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 10:38:51.437887       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 10:38:51.437939       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 10:38:51.437959       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 10:38:51.440152       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 10:38:51.440179       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 10:38:52.315489       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 10:38:52.315529       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 10:38:52.421394       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 10:38:52.421680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 10:38:52.421836       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 10:38:52.421719       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0731 10:38:52.980263       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 10:41:07 addons-315335 kubelet[1346]: I0731 10:41:07.404120    1346 scope.go:115] "RemoveContainer" containerID="28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d"
	Jul 31 10:41:07 addons-315335 kubelet[1346]: E0731 10:41:07.404631    1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d\": not found" containerID="28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d"
	Jul 31 10:41:07 addons-315335 kubelet[1346]: I0731 10:41:07.404672    1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d} err="failed to get container status \"28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"28ffdf1d48821801d93116d152e007db036c5225dd7ac3cf44af7d78a7ed1e0d\": not found"
	Jul 31 10:41:08 addons-315335 kubelet[1346]: I0731 10:41:08.297410    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=56d4705d-3345-4bd1-b2ac-94807ddc01c5 path="/var/lib/kubelet/pods/56d4705d-3345-4bd1-b2ac-94807ddc01c5/volumes"
	Jul 31 10:41:08 addons-315335 kubelet[1346]: I0731 10:41:08.297796    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d04bfd3f-b2f4-41d6-a27a-2abc090781c5 path="/var/lib/kubelet/pods/d04bfd3f-b2f4-41d6-a27a-2abc090781c5/volumes"
	Jul 31 10:41:12 addons-315335 kubelet[1346]: I0731 10:41:12.296229    1346 scope.go:115] "RemoveContainer" containerID="6dbf94ffcffd487222f930bd34f67b9f671d0b962366e382a17af7d1dc27d1c6"
	Jul 31 10:41:12 addons-315335 kubelet[1346]: I0731 10:41:12.637788    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-855kr\" (UniqueName: \"kubernetes.io/projected/84076644-76d1-498a-be93-d967c34530cd-kube-api-access-855kr\") pod \"84076644-76d1-498a-be93-d967c34530cd\" (UID: \"84076644-76d1-498a-be93-d967c34530cd\") "
	Jul 31 10:41:12 addons-315335 kubelet[1346]: I0731 10:41:12.639986    1346 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84076644-76d1-498a-be93-d967c34530cd-kube-api-access-855kr" (OuterVolumeSpecName: "kube-api-access-855kr") pod "84076644-76d1-498a-be93-d967c34530cd" (UID: "84076644-76d1-498a-be93-d967c34530cd"). InnerVolumeSpecName "kube-api-access-855kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 10:41:12 addons-315335 kubelet[1346]: I0731 10:41:12.738525    1346 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-855kr\" (UniqueName: \"kubernetes.io/projected/84076644-76d1-498a-be93-d967c34530cd-kube-api-access-855kr\") on node \"addons-315335\" DevicePath \"\""
	Jul 31 10:41:13 addons-315335 kubelet[1346]: I0731 10:41:13.376815    1346 scope.go:115] "RemoveContainer" containerID="2a16fb016951d024884f344775441937a6266a3c85ca2243e17b17ac869dc7af"
	Jul 31 10:41:13 addons-315335 kubelet[1346]: I0731 10:41:13.383460    1346 scope.go:115] "RemoveContainer" containerID="d295e7c45acdded482e7f5dce02af5f7ef654d060c91d4093d981b5190ac81fb"
	Jul 31 10:41:13 addons-315335 kubelet[1346]: E0731 10:41:13.384010    1346 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-n66t7_default(ca58c2d1-d15c-4ec0-9ca2-a28ab348dc81)\"" pod="default/hello-world-app-65bdb79f98-n66t7" podUID=ca58c2d1-d15c-4ec0-9ca2-a28ab348dc81
	Jul 31 10:41:13 addons-315335 kubelet[1346]: I0731 10:41:13.387215    1346 scope.go:115] "RemoveContainer" containerID="6dbf94ffcffd487222f930bd34f67b9f671d0b962366e382a17af7d1dc27d1c6"
	Jul 31 10:41:14 addons-315335 kubelet[1346]: E0731 10:41:14.121528    1346 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-ndthh.1776edb58b9e0b33", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-ndthh", UID:"6e7923a6-b865-46e6-8772-b6d945e804e3", APIVersion:"v1", ResourceVersion:"651", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-315335"}, FirstTimestamp:time.Date(2023, time.July, 31, 10, 41, 14, 111454003, time.Local), LastTimestamp:time.Date(2023, time.July, 31, 10, 41, 14, 111454003, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-ndthh.1776edb58b9e0b33" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 10:41:14 addons-315335 kubelet[1346]: I0731 10:41:14.296594    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3b0ded40-5b1f-4a57-a096-c4690e59b98a path="/var/lib/kubelet/pods/3b0ded40-5b1f-4a57-a096-c4690e59b98a/volumes"
	Jul 31 10:41:14 addons-315335 kubelet[1346]: I0731 10:41:14.297341    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=84076644-76d1-498a-be93-d967c34530cd path="/var/lib/kubelet/pods/84076644-76d1-498a-be93-d967c34530cd/volumes"
	Jul 31 10:41:14 addons-315335 kubelet[1346]: I0731 10:41:14.297881    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b8e847af-0e5f-4c26-a824-a05dcbd61487 path="/var/lib/kubelet/pods/b8e847af-0e5f-4c26-a824-a05dcbd61487/volumes"
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.391398    1346 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29402da966a2b2fd7960780b7f0630e39bc4edca0418960ad61b10a2565bbed6"
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.454450    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ljq2\" (UniqueName: \"kubernetes.io/projected/6e7923a6-b865-46e6-8772-b6d945e804e3-kube-api-access-7ljq2\") pod \"6e7923a6-b865-46e6-8772-b6d945e804e3\" (UID: \"6e7923a6-b865-46e6-8772-b6d945e804e3\") "
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.454503    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e7923a6-b865-46e6-8772-b6d945e804e3-webhook-cert\") pod \"6e7923a6-b865-46e6-8772-b6d945e804e3\" (UID: \"6e7923a6-b865-46e6-8772-b6d945e804e3\") "
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.456696    1346 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7923a6-b865-46e6-8772-b6d945e804e3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6e7923a6-b865-46e6-8772-b6d945e804e3" (UID: "6e7923a6-b865-46e6-8772-b6d945e804e3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.457794    1346 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7923a6-b865-46e6-8772-b6d945e804e3-kube-api-access-7ljq2" (OuterVolumeSpecName: "kube-api-access-7ljq2") pod "6e7923a6-b865-46e6-8772-b6d945e804e3" (UID: "6e7923a6-b865-46e6-8772-b6d945e804e3"). InnerVolumeSpecName "kube-api-access-7ljq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.555494    1346 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7ljq2\" (UniqueName: \"kubernetes.io/projected/6e7923a6-b865-46e6-8772-b6d945e804e3-kube-api-access-7ljq2\") on node \"addons-315335\" DevicePath \"\""
	Jul 31 10:41:15 addons-315335 kubelet[1346]: I0731 10:41:15.555537    1346 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e7923a6-b865-46e6-8772-b6d945e804e3-webhook-cert\") on node \"addons-315335\" DevicePath \"\""
	Jul 31 10:41:16 addons-315335 kubelet[1346]: I0731 10:41:16.297390    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6e7923a6-b865-46e6-8772-b6d945e804e3 path="/var/lib/kubelet/pods/6e7923a6-b865-46e6-8772-b6d945e804e3/volumes"
	
	* 
	* ==> storage-provisioner [9504d9daca906a4c40bca19f8cb97f4411b210baa038a8fcef983926a111e948] <==
	* I0731 10:39:12.476317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 10:39:12.499724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 10:39:12.499829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 10:39:12.517120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 10:39:12.519324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-315335_ae290be8-5809-48d8-b855-d5aaf480fee0!
	I0731 10:39:12.529592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"67d085c0-d5d9-4e50-ba50-ac3487776c73", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-315335_ae290be8-5809-48d8-b855-d5aaf480fee0 became leader
	I0731 10:39:12.620358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-315335_ae290be8-5809-48d8-b855-d5aaf480fee0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-315335 -n addons-315335
helpers_test.go:261: (dbg) Run:  kubectl --context addons-315335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (36.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr: (3.636698577s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-302253" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr: (3.273424183s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-302253" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.717421848s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-302253
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 image load --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr: (3.395037914s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-302253" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image save gcr.io/google-containers/addon-resizer:functional-302253 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0731 10:46:12.383889 3649472 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:46:12.385056 3649472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:12.385093 3649472 out.go:309] Setting ErrFile to fd 2...
	I0731 10:46:12.385142 3649472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:12.385462 3649472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:46:12.386248 3649472 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:46:12.386420 3649472 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:46:12.386906 3649472 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
	I0731 10:46:12.444840 3649472 ssh_runner.go:195] Run: systemctl --version
	I0731 10:46:12.444893 3649472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
	I0731 10:46:12.481276 3649472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
	I0731 10:46:12.610295 3649472 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0731 10:46:12.610379 3649472 cache_images.go:254] Failed to load cached images for profile functional-302253. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0731 10:46:12.610400 3649472 cache_images.go:262] succeeded pushing to: 
	I0731 10:46:12.610404 3649472 cache_images.go:263] failed pushing to: functional-302253

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdspecific-port3889493696/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (535.05856ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.49762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.848881ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.169135ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.945498ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2023/07/31 10:46:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.790315ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (407.41844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 12.197951955s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (357.432259ms)

                                                
                                                
-- stdout --
	total 8
	drwxr-xr-x 2 root root 4096 Jul 31 10:46 .
	drwxr-xr-x 1 root root 4096 Jul 31 10:46 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-arm64 -p functional-302253 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "sudo umount -f /mount-9p": exit status 1 (336.766439ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-302253 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdspecific-port3889493696/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdspecific-port3889493696/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdspecific-port3889493696/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0731 10:46:38.429329 3651514 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:38.429523 3651514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:38.429534 3651514 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:38.429541 3651514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:38.429895 3651514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:38.430622 3651514 mustload.go:65] Loading cluster: functional-302253
I0731 10:46:38.431587 3651514 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:38.433440 3651514 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:38.458852 3651514 host.go:66] Checking if "functional-302253" exists ...
I0731 10:46:38.459903 3651514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0731 10:46:38.581439 3651514 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-31 10:46:38.570983262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
I0731 10:46:38.581582 3651514 cli_runner.go:164] Run: docker network inspect functional-302253 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0731 10:46:38.673020 3651514 out.go:177] 
W0731 10:46:38.674712 3651514 out.go:239] X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
W0731 10:46:38.674732 3651514 out.go:239] * 
* 
W0731 10:46:38.705961 3651514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_773a7ac181ac410b42fd1412dcb585a9bc33eb08_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 10:46:38.709927 3651514 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (13.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (55.62s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-947999 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-947999 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.243891655s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-947999 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-947999 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a574b70a-f7af-462b-aaf4-b93e49fb1cbf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a574b70a-f7af-462b-aaf4-b93e49fb1cbf] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.019199469s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-947999 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021021281s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons disable ingress-dns --alsologtostderr -v=1: (8.227093363s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons disable ingress --alsologtostderr -v=1: (7.528181434s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-947999
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-947999:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e",
	        "Created": "2023-07-31T10:47:11.336567688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3654093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T10:47:11.636224244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e/hosts",
	        "LogPath": "/var/lib/docker/containers/495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e/495b5ccaa57fd31f2dbf5576ac4394077b74d78178d8eb6d6ce510923d6d5e6e-json.log",
	        "Name": "/ingress-addon-legacy-947999",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-947999:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-947999",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6e4c85c28fb5cd01428f3651773dbb34589f32e93846e2f6966d87e6b9ad82b7-init/diff:/var/lib/docker/overlay2/f6e468e16ca02ac051c3ef69ec9d67702b3bb9f63235ab1123ef1010168b87cf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e4c85c28fb5cd01428f3651773dbb34589f32e93846e2f6966d87e6b9ad82b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e4c85c28fb5cd01428f3651773dbb34589f32e93846e2f6966d87e6b9ad82b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e4c85c28fb5cd01428f3651773dbb34589f32e93846e2f6966d87e6b9ad82b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-947999",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-947999/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-947999",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-947999",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-947999",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d89d9dab949838393ef3d0c601fa1eedd6951638e7d827e22c915ab741653b0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35356"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35355"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6d89d9dab949",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-947999": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "495b5ccaa57f",
	                        "ingress-addon-legacy-947999"
	                    ],
	                    "NetworkID": "801e7765f31a7655cc7c47636b9dd4b7475fc0a175fdd48e462a0c4a8d14275b",
	                    "EndpointID": "238300fe7a75696b4dd64241183ea637b27a5ecb32064440e0ef8c6a0fdc30a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-947999 -n ingress-addon-legacy-947999
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-947999 logs -n 25: (1.344369352s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-302253                                                      | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-302253                                                      | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh pgrep                                            | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-302253 image build -t                                       | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | localhost/my-image:functional-302253                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh findmnt                                          | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                 |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh mount |                                          | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | grep 9p; ls -la /mount-9p; cat                                         |                             |         |         |                     |                     |
	|         | /mount-9p/pod-dates                                                    |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh sudo                                             | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | umount -f /mount-9p                                                    |                             |         |         |                     |                     |
	| image   | functional-302253 image ls                                             | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	| mount   | -p functional-302253                                                   | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount3 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh findmnt                                          | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount   | -p functional-302253                                                   | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount1 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount   | -p functional-302253                                                   | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount2 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| image   | functional-302253                                                      | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh findmnt                                          | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | -T /mount2                                                             |                             |         |         |                     |                     |
	| image   | functional-302253                                                      | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-302253 ssh findmnt                                          | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	|         | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount   | -p functional-302253                                                   | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC |                     |
	|         | --kill=true                                                            |                             |         |         |                     |                     |
	| delete  | -p functional-302253                                                   | functional-302253           | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:46 UTC |
	| start   | -p ingress-addon-legacy-947999                                         | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:46 UTC | 31 Jul 23 10:48 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                         |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-947999                                            | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:48 UTC | 31 Jul 23 10:48 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-947999                                            | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:48 UTC | 31 Jul 23 10:48 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-947999                                            | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:48 UTC | 31 Jul 23 10:48 UTC |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-947999 ip                                         | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:48 UTC | 31 Jul 23 10:48 UTC |
	| addons  | ingress-addon-legacy-947999                                            | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:49 UTC | 31 Jul 23 10:49 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-947999                                            | ingress-addon-legacy-947999 | jenkins | v1.31.1 | 31 Jul 23 10:49 UTC | 31 Jul 23 10:49 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:46:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:46:55.782484 3653627 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:46:55.782692 3653627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:55.782719 3653627 out.go:309] Setting ErrFile to fd 2...
	I0731 10:46:55.782740 3653627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:55.783049 3653627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:46:55.783490 3653627 out.go:303] Setting JSON to false
	I0731 10:46:55.784513 3653627 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66563,"bootTime":1690733853,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:46:55.784600 3653627 start.go:138] virtualization:  
	I0731 10:46:55.787171 3653627 out.go:177] * [ingress-addon-legacy-947999] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 10:46:55.789343 3653627 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 10:46:55.791218 3653627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:46:55.789489 3653627 notify.go:220] Checking for updates...
	I0731 10:46:55.793527 3653627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:46:55.795431 3653627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:46:55.797287 3653627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 10:46:55.799586 3653627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:46:55.801663 3653627 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:46:55.825645 3653627 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:46:55.825747 3653627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:46:55.913080 3653627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-31 10:46:55.903140798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:46:55.913212 3653627 docker.go:294] overlay module found
	I0731 10:46:55.915496 3653627 out.go:177] * Using the docker driver based on user configuration
	I0731 10:46:55.917421 3653627 start.go:298] selected driver: docker
	I0731 10:46:55.917440 3653627 start.go:898] validating driver "docker" against <nil>
	I0731 10:46:55.917453 3653627 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:46:55.918071 3653627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:46:55.982778 3653627 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-31 10:46:55.973787695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:46:55.982941 3653627 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 10:46:55.983158 3653627 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:46:55.985214 3653627 out.go:177] * Using Docker driver with root privileges
	I0731 10:46:55.987177 3653627 cni.go:84] Creating CNI manager for ""
	I0731 10:46:55.987193 3653627 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:46:55.987204 3653627 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:46:55.987219 3653627 start_flags.go:319] config:
	{Name:ingress-addon-legacy-947999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-947999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:46:55.989156 3653627 out.go:177] * Starting control plane node ingress-addon-legacy-947999 in cluster ingress-addon-legacy-947999
	I0731 10:46:55.991508 3653627 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0731 10:46:55.993355 3653627 out.go:177] * Pulling base image ...
	I0731 10:46:55.995223 3653627 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0731 10:46:55.995314 3653627 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:46:56.012565 3653627 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 10:46:56.012586 3653627 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 10:46:56.071787 3653627 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0731 10:46:56.071810 3653627 cache.go:57] Caching tarball of preloaded images
	I0731 10:46:56.071963 3653627 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0731 10:46:56.074124 3653627 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0731 10:46:56.075865 3653627 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0731 10:46:56.194007 3653627 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0731 10:47:03.647853 3653627 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0731 10:47:03.647983 3653627 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0731 10:47:04.752588 3653627 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0731 10:47:04.752950 3653627 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/config.json ...
	I0731 10:47:04.752984 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/config.json: {Name:mke948fae1f80b49e1261719602f4c1065c88bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:04.753180 3653627 cache.go:195] Successfully downloaded all kic artifacts
	I0731 10:47:04.753226 3653627 start.go:365] acquiring machines lock for ingress-addon-legacy-947999: {Name:mk59d9eb2a3f6b20f477a3ee602d229703263654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:47:04.753288 3653627 start.go:369] acquired machines lock for "ingress-addon-legacy-947999" in 47.18µs
	I0731 10:47:04.753310 3653627 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-947999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-947999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0731 10:47:04.753400 3653627 start.go:125] createHost starting for "" (driver="docker")
	I0731 10:47:04.755960 3653627 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0731 10:47:04.756169 3653627 start.go:159] libmachine.API.Create for "ingress-addon-legacy-947999" (driver="docker")
	I0731 10:47:04.756195 3653627 client.go:168] LocalClient.Create starting
	I0731 10:47:04.756308 3653627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem
	I0731 10:47:04.756343 3653627 main.go:141] libmachine: Decoding PEM data...
	I0731 10:47:04.756362 3653627 main.go:141] libmachine: Parsing certificate...
	I0731 10:47:04.756419 3653627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem
	I0731 10:47:04.756439 3653627 main.go:141] libmachine: Decoding PEM data...
	I0731 10:47:04.756453 3653627 main.go:141] libmachine: Parsing certificate...
	I0731 10:47:04.756784 3653627 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-947999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 10:47:04.773888 3653627 cli_runner.go:211] docker network inspect ingress-addon-legacy-947999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 10:47:04.773975 3653627 network_create.go:281] running [docker network inspect ingress-addon-legacy-947999] to gather additional debugging logs...
	I0731 10:47:04.773996 3653627 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-947999
	W0731 10:47:04.790512 3653627 cli_runner.go:211] docker network inspect ingress-addon-legacy-947999 returned with exit code 1
	I0731 10:47:04.790551 3653627 network_create.go:284] error running [docker network inspect ingress-addon-legacy-947999]: docker network inspect ingress-addon-legacy-947999: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-947999 not found
	I0731 10:47:04.790565 3653627 network_create.go:286] output of [docker network inspect ingress-addon-legacy-947999]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-947999 not found
	
	** /stderr **
	I0731 10:47:04.790628 3653627 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:47:04.807746 3653627 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001074a90}
	I0731 10:47:04.807784 3653627 network_create.go:123] attempt to create docker network ingress-addon-legacy-947999 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 10:47:04.807842 3653627 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-947999 ingress-addon-legacy-947999
	I0731 10:47:04.877628 3653627 network_create.go:107] docker network ingress-addon-legacy-947999 192.168.49.0/24 created
	I0731 10:47:04.877661 3653627 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-947999" container
	I0731 10:47:04.877739 3653627 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 10:47:04.892931 3653627 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-947999 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-947999 --label created_by.minikube.sigs.k8s.io=true
	I0731 10:47:04.909892 3653627 oci.go:103] Successfully created a docker volume ingress-addon-legacy-947999
	I0731 10:47:04.909975 3653627 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-947999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-947999 --entrypoint /usr/bin/test -v ingress-addon-legacy-947999:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 10:47:06.410344 3653627 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-947999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-947999 --entrypoint /usr/bin/test -v ingress-addon-legacy-947999:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.500324499s)
	I0731 10:47:06.410374 3653627 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-947999
	I0731 10:47:06.410402 3653627 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0731 10:47:06.410420 3653627 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 10:47:06.410502 3653627 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-947999:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 10:47:11.250842 3653627 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-947999:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.840296963s)
	I0731 10:47:11.250872 3653627 kic.go:199] duration metric: took 4.840449 seconds to extract preloaded images to volume
	W0731 10:47:11.251024 3653627 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 10:47:11.251133 3653627 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 10:47:11.321016 3653627 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-947999 --name ingress-addon-legacy-947999 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-947999 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-947999 --network ingress-addon-legacy-947999 --ip 192.168.49.2 --volume ingress-addon-legacy-947999:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 10:47:11.644299 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Running}}
	I0731 10:47:11.670399 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:47:11.693251 3653627 cli_runner.go:164] Run: docker exec ingress-addon-legacy-947999 stat /var/lib/dpkg/alternatives/iptables
	I0731 10:47:11.780557 3653627 oci.go:144] the created container "ingress-addon-legacy-947999" has a running status.
	I0731 10:47:11.780581 3653627 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa...
	I0731 10:47:12.310311 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 10:47:12.310370 3653627 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 10:47:12.343135 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:47:12.374381 3653627 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 10:47:12.374399 3653627 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-947999 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 10:47:12.453634 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:47:12.475199 3653627 machine.go:88] provisioning docker machine ...
	I0731 10:47:12.475229 3653627 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-947999"
	I0731 10:47:12.475294 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:12.494230 3653627 main.go:141] libmachine: Using SSH client type: native
	I0731 10:47:12.494694 3653627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35358 <nil> <nil>}
	I0731 10:47:12.494715 3653627 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-947999 && echo "ingress-addon-legacy-947999" | sudo tee /etc/hostname
	I0731 10:47:12.684404 3653627 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-947999
	
	I0731 10:47:12.684483 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:12.714489 3653627 main.go:141] libmachine: Using SSH client type: native
	I0731 10:47:12.714925 3653627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35358 <nil> <nil>}
	I0731 10:47:12.714950 3653627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-947999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-947999/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-947999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:47:12.854197 3653627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:47:12.854277 3653627 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16969-3616075/.minikube CaCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16969-3616075/.minikube}
	I0731 10:47:12.854332 3653627 ubuntu.go:177] setting up certificates
	I0731 10:47:12.854357 3653627 provision.go:83] configureAuth start
	I0731 10:47:12.854444 3653627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-947999
	I0731 10:47:12.876977 3653627 provision.go:138] copyHostCerts
	I0731 10:47:12.877014 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem
	I0731 10:47:12.877044 3653627 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem, removing ...
	I0731 10:47:12.877059 3653627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem
	I0731 10:47:12.877178 3653627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem (1082 bytes)
	I0731 10:47:12.877320 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem
	I0731 10:47:12.877343 3653627 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem, removing ...
	I0731 10:47:12.877347 3653627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem
	I0731 10:47:12.877382 3653627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem (1123 bytes)
	I0731 10:47:12.877428 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem
	I0731 10:47:12.877448 3653627 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem, removing ...
	I0731 10:47:12.877456 3653627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem
	I0731 10:47:12.877483 3653627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem (1679 bytes)
	I0731 10:47:12.877529 3653627 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-947999 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-947999]
	I0731 10:47:13.755123 3653627 provision.go:172] copyRemoteCerts
	I0731 10:47:13.755212 3653627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:47:13.755265 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:13.771850 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:47:13.867303 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 10:47:13.867364 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0731 10:47:13.893861 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 10:47:13.893918 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 10:47:13.921125 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 10:47:13.921185 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:47:13.947997 3653627 provision.go:86] duration metric: configureAuth took 1.093612906s
	I0731 10:47:13.948062 3653627 ubuntu.go:193] setting minikube options for container-runtime
	I0731 10:47:13.948290 3653627 config.go:182] Loaded profile config "ingress-addon-legacy-947999": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0731 10:47:13.948306 3653627 machine.go:91] provisioned docker machine in 1.473089052s
	I0731 10:47:13.948313 3653627 client.go:171] LocalClient.Create took 9.192113436s
	I0731 10:47:13.948333 3653627 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-947999" took 9.192161845s
	I0731 10:47:13.948349 3653627 start.go:300] post-start starting for "ingress-addon-legacy-947999" (driver="docker")
	I0731 10:47:13.948358 3653627 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:47:13.948438 3653627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:47:13.948513 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:13.965542 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:47:14.060072 3653627 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:47:14.065622 3653627 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 10:47:14.065661 3653627 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 10:47:14.065679 3653627 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 10:47:14.065685 3653627 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 10:47:14.065701 3653627 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 10:47:14.065769 3653627 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 10:47:14.065853 3653627 filesync.go:149] local asset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> 36214032.pem in /etc/ssl/certs
	I0731 10:47:14.065866 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> /etc/ssl/certs/36214032.pem
	I0731 10:47:14.065974 3653627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 10:47:14.076491 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 10:47:14.106244 3653627 start.go:303] post-start completed in 157.87897ms
	I0731 10:47:14.106613 3653627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-947999
	I0731 10:47:14.124161 3653627 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/config.json ...
	I0731 10:47:14.124424 3653627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 10:47:14.124472 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:14.142945 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:47:14.235100 3653627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 10:47:14.240561 3653627 start.go:128] duration metric: createHost completed in 9.487145995s
	I0731 10:47:14.240583 3653627 start.go:83] releasing machines lock for "ingress-addon-legacy-947999", held for 9.487285654s
	I0731 10:47:14.240653 3653627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-947999
	I0731 10:47:14.256690 3653627 ssh_runner.go:195] Run: cat /version.json
	I0731 10:47:14.256740 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:14.256984 3653627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:47:14.257043 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:47:14.281311 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:47:14.306981 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:47:14.377328 3653627 ssh_runner.go:195] Run: systemctl --version
	I0731 10:47:14.524133 3653627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 10:47:14.529706 3653627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0731 10:47:14.558344 3653627 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0731 10:47:14.558459 3653627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 10:47:14.593603 3653627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 10:47:14.593626 3653627 start.go:466] detecting cgroup driver to use...
	I0731 10:47:14.593657 3653627 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 10:47:14.593735 3653627 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 10:47:14.608572 3653627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:47:14.621546 3653627 docker.go:196] disabling cri-docker service (if available) ...
	I0731 10:47:14.621629 3653627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 10:47:14.637245 3653627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 10:47:14.653500 3653627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 10:47:14.758032 3653627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 10:47:14.863103 3653627 docker.go:212] disabling docker service ...
	I0731 10:47:14.863204 3653627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 10:47:14.885534 3653627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 10:47:14.898825 3653627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 10:47:14.991118 3653627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 10:47:15.093249 3653627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 10:47:15.107543 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:47:15.129296 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0731 10:47:15.142591 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 10:47:15.155853 3653627 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 10:47:15.156007 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 10:47:15.172332 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:47:15.184771 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 10:47:15.197068 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:47:15.209162 3653627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:47:15.221470 3653627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 10:47:15.233581 3653627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:47:15.243942 3653627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:47:15.253999 3653627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:47:15.356247 3653627 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:47:15.452756 3653627 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0731 10:47:15.452837 3653627 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0731 10:47:15.458040 3653627 start.go:534] Will wait 60s for crictl version
	I0731 10:47:15.458122 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:15.462608 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:47:15.507522 3653627 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0731 10:47:15.507594 3653627 ssh_runner.go:195] Run: containerd --version
	I0731 10:47:15.534232 3653627 ssh_runner.go:195] Run: containerd --version
	I0731 10:47:15.565232 3653627 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.21 ...
	I0731 10:47:15.566847 3653627 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-947999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:47:15.583325 3653627 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 10:47:15.587939 3653627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:47:15.600633 3653627 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0731 10:47:15.600710 3653627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:47:15.640286 3653627 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0731 10:47:15.640353 3653627 ssh_runner.go:195] Run: which lz4
	I0731 10:47:15.644669 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0731 10:47:15.644765 3653627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 10:47:15.648684 3653627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 10:47:15.648710 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0731 10:47:17.788935 3653627 containerd.go:547] Took 2.144209 seconds to copy over tarball
	I0731 10:47:17.789038 3653627 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 10:47:20.391649 3653627 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602579934s)
	I0731 10:47:20.391714 3653627 containerd.go:554] Took 2.602757 seconds to extract the tarball
	I0731 10:47:20.391739 3653627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 10:47:20.475770 3653627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:47:20.573148 3653627 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:47:20.672057 3653627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:47:20.720216 3653627 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 10:47:20.720378 3653627 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 10:47:20.720581 3653627 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 10:47:20.720656 3653627 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 10:47:20.720729 3653627 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 10:47:20.720915 3653627 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0731 10:47:20.720995 3653627 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 10:47:20.721280 3653627 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0731 10:47:20.721471 3653627 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 10:47:20.721878 3653627 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 10:47:20.722072 3653627 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 10:47:20.722207 3653627 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 10:47:20.722952 3653627 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 10:47:20.723413 3653627 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0731 10:47:20.723631 3653627 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0731 10:47:20.724350 3653627 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:47:20.725003 3653627 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0731 10:47:21.087899 3653627 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.088040 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0731 10:47:21.148599 3653627 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.149486 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	I0731 10:47:21.161527 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0731 10:47:21.166783 3653627 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.166970 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0731 10:47:21.170516 3653627 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.170699 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W0731 10:47:21.183888 3653627 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.184079 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0731 10:47:21.209302 3653627 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.209531 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W0731 10:47:21.401181 3653627 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 10:47:21.401301 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 10:47:21.532070 3653627 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0731 10:47:21.532111 3653627 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0731 10:47:21.532158 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.878363 3653627 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0731 10:47:21.878443 3653627 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 10:47:21.878525 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.878607 3653627 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0731 10:47:21.878649 3653627 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 10:47:21.878690 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.909501 3653627 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0731 10:47:21.909584 3653627 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 10:47:21.909660 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.909786 3653627 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0731 10:47:21.909824 3653627 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 10:47:21.909868 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.909965 3653627 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0731 10:47:21.910003 3653627 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 10:47:21.910048 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.914735 3653627 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0731 10:47:21.914767 3653627 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0731 10:47:21.914813 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.968180 3653627 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 10:47:21.968239 3653627 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:47:21.968295 3653627 ssh_runner.go:195] Run: which crictl
	I0731 10:47:21.968380 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0731 10:47:21.968414 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0731 10:47:21.968387 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 10:47:21.968499 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 10:47:21.968577 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0731 10:47:21.968609 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0731 10:47:21.968635 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0731 10:47:22.150444 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0731 10:47:22.150520 3653627 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:47:22.150556 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0731 10:47:22.150585 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0731 10:47:22.150611 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0731 10:47:22.150640 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0731 10:47:22.150678 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0731 10:47:22.150711 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0731 10:47:22.206494 3653627 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 10:47:22.206570 3653627 cache_images.go:92] LoadImages completed in 1.48628763s
	W0731 10:47:22.206628 3653627 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0731 10:47:22.206692 3653627 ssh_runner.go:195] Run: sudo crictl info
	I0731 10:47:22.246337 3653627 cni.go:84] Creating CNI manager for ""
	I0731 10:47:22.246362 3653627 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:47:22.246373 3653627 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 10:47:22.246391 3653627 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-947999 NodeName:ingress-addon-legacy-947999 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 10:47:22.246514 3653627 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-947999"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 10:47:22.246613 3653627 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-947999 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-947999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 10:47:22.246681 3653627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0731 10:47:22.257522 3653627 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:47:22.257592 3653627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:47:22.267962 3653627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0731 10:47:22.289151 3653627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0731 10:47:22.310236 3653627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0731 10:47:22.331346 3653627 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 10:47:22.335810 3653627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:47:22.348963 3653627 certs.go:56] Setting up /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999 for IP: 192.168.49.2
	I0731 10:47:22.348996 3653627 certs.go:190] acquiring lock for shared ca certs: {Name:mkeee59ed5ac829e33e53e6a4b7b185b15e70a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:22.349190 3653627 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key
	I0731 10:47:22.349242 3653627 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key
	I0731 10:47:22.349290 3653627 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key
	I0731 10:47:22.349322 3653627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt with IP's: []
	I0731 10:47:22.739437 3653627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt ...
	I0731 10:47:22.739467 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: {Name:mkabb356eb6b1f154ed6d0a18248d5357f7f2b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:22.739661 3653627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key ...
	I0731 10:47:22.739675 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key: {Name:mk8d52367a14db871e221fe790ceda3af6f956b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:22.739782 3653627 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key.dd3b5fb2
	I0731 10:47:22.739799 3653627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 10:47:23.169826 3653627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt.dd3b5fb2 ...
	I0731 10:47:23.169856 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt.dd3b5fb2: {Name:mk45e42b5c5fb64f94424f1cb657fdc6be8a4c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:23.170037 3653627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key.dd3b5fb2 ...
	I0731 10:47:23.170049 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key.dd3b5fb2: {Name:mke9c7ad08675a5d05ba433c7eb6bd5c1441cad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:23.170131 3653627 certs.go:337] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt
	I0731 10:47:23.170205 3653627 certs.go:341] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key
	I0731 10:47:23.170272 3653627 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.key
	I0731 10:47:23.170290 3653627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.crt with IP's: []
	I0731 10:47:23.615493 3653627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.crt ...
	I0731 10:47:23.615523 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.crt: {Name:mk4e3397c1540f289b9e957cbfe26eb0569536c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:23.615700 3653627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.key ...
	I0731 10:47:23.615713 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.key: {Name:mk6f07b295fd3b15176dd1991b480beef5f7c0a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:47:23.615789 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 10:47:23.615811 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 10:47:23.615823 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 10:47:23.615838 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 10:47:23.615848 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 10:47:23.615863 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 10:47:23.615883 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 10:47:23.615898 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 10:47:23.615950 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403.pem (1338 bytes)
	W0731 10:47:23.615990 3653627 certs.go:433] ignoring /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403_empty.pem, impossibly tiny 0 bytes
	I0731 10:47:23.616003 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 10:47:23.616033 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:47:23.616061 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:47:23.616093 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem (1679 bytes)
	I0731 10:47:23.616139 3653627 certs.go:437] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 10:47:23.616169 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> /usr/share/ca-certificates/36214032.pem
	I0731 10:47:23.616186 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:47:23.616204 3653627 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403.pem -> /usr/share/ca-certificates/3621403.pem
	I0731 10:47:23.616727 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 10:47:23.644688 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 10:47:23.671843 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:47:23.698774 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 10:47:23.725275 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:47:23.751402 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 10:47:23.777660 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:47:23.804155 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 10:47:23.831176 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /usr/share/ca-certificates/36214032.pem (1708 bytes)
	I0731 10:47:23.857757 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:47:23.884044 3653627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403.pem --> /usr/share/ca-certificates/3621403.pem (1338 bytes)
	I0731 10:47:23.910960 3653627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:47:23.930621 3653627 ssh_runner.go:195] Run: openssl version
	I0731 10:47:23.937151 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:47:23.948282 3653627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:47:23.952543 3653627 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:47:23.952644 3653627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:47:23.960749 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 10:47:23.971807 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3621403.pem && ln -fs /usr/share/ca-certificates/3621403.pem /etc/ssl/certs/3621403.pem"
	I0731 10:47:23.982785 3653627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3621403.pem
	I0731 10:47:23.987331 3653627 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 10:43 /usr/share/ca-certificates/3621403.pem
	I0731 10:47:23.987393 3653627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3621403.pem
	I0731 10:47:23.995810 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3621403.pem /etc/ssl/certs/51391683.0"
	I0731 10:47:24.008339 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/36214032.pem && ln -fs /usr/share/ca-certificates/36214032.pem /etc/ssl/certs/36214032.pem"
	I0731 10:47:24.020738 3653627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/36214032.pem
	I0731 10:47:24.025986 3653627 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 10:43 /usr/share/ca-certificates/36214032.pem
	I0731 10:47:24.026059 3653627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/36214032.pem
	I0731 10:47:24.034964 3653627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/36214032.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 10:47:24.046860 3653627 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 10:47:24.051399 3653627 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 10:47:24.051452 3653627 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-947999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-947999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:47:24.051545 3653627 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0731 10:47:24.051607 3653627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 10:47:24.093829 3653627 cri.go:89] found id: ""
	I0731 10:47:24.093902 3653627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:47:24.105129 3653627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:47:24.116154 3653627 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 10:47:24.116269 3653627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:47:24.127721 3653627 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:47:24.127778 3653627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 10:47:24.188676 3653627 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0731 10:47:24.188933 3653627 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 10:47:24.238575 3653627 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 10:47:24.238647 3653627 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0731 10:47:24.238687 3653627 kubeadm.go:322] OS: Linux
	I0731 10:47:24.238734 3653627 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 10:47:24.238783 3653627 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 10:47:24.238831 3653627 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 10:47:24.238880 3653627 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 10:47:24.238936 3653627 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 10:47:24.238987 3653627 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 10:47:24.326951 3653627 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:47:24.327075 3653627 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:47:24.327198 3653627 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:47:24.552302 3653627 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:47:24.554027 3653627 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:47:24.554307 3653627 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 10:47:24.671445 3653627 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:47:24.675589 3653627 out.go:204]   - Generating certificates and keys ...
	I0731 10:47:24.675752 3653627 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 10:47:24.675832 3653627 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 10:47:25.102180 3653627 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 10:47:25.436881 3653627 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 10:47:25.872675 3653627 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 10:47:26.078451 3653627 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 10:47:26.386520 3653627 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 10:47:26.386924 3653627 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-947999 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:47:26.681429 3653627 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 10:47:26.681843 3653627 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-947999 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:47:27.046571 3653627 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 10:47:27.558618 3653627 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 10:47:28.036479 3653627 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 10:47:28.036733 3653627 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:47:28.527735 3653627 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:47:28.991042 3653627 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:47:29.540992 3653627 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:47:29.795703 3653627 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:47:29.796580 3653627 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:47:29.799349 3653627 out.go:204]   - Booting up control plane ...
	I0731 10:47:29.799475 3653627 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:47:29.805687 3653627 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:47:29.807672 3653627 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:47:29.809851 3653627 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:47:29.813356 3653627 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:47:41.818339 3653627 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.004361 seconds
	I0731 10:47:41.818488 3653627 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:47:41.831153 3653627 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:47:42.355713 3653627 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:47:42.355902 3653627 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-947999 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0731 10:47:42.863675 3653627 kubeadm.go:322] [bootstrap-token] Using token: e4isyn.hv6ps6wcz73n8to6
	I0731 10:47:42.865622 3653627 out.go:204]   - Configuring RBAC rules ...
	I0731 10:47:42.865751 3653627 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:47:42.871141 3653627 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:47:42.878447 3653627 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:47:42.881402 3653627 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:47:42.883931 3653627 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:47:42.886507 3653627 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:47:42.895957 3653627 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:47:43.367030 3653627 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 10:47:43.412618 3653627 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 10:47:43.414250 3653627 kubeadm.go:322] 
	I0731 10:47:43.414318 3653627 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 10:47:43.414324 3653627 kubeadm.go:322] 
	I0731 10:47:43.414396 3653627 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 10:47:43.414401 3653627 kubeadm.go:322] 
	I0731 10:47:43.414425 3653627 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 10:47:43.414480 3653627 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:47:43.414528 3653627 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:47:43.414532 3653627 kubeadm.go:322] 
	I0731 10:47:43.414581 3653627 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 10:47:43.414651 3653627 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:47:43.414715 3653627 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:47:43.414719 3653627 kubeadm.go:322] 
	I0731 10:47:43.414797 3653627 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:47:43.414869 3653627 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 10:47:43.414890 3653627 kubeadm.go:322] 
	I0731 10:47:43.414969 3653627 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e4isyn.hv6ps6wcz73n8to6 \
	I0731 10:47:43.415070 3653627 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:86a59b46a66ac234bd53b6c72750e3c62130510b828ccfbf571d11f4fbb3f8f1 \
	I0731 10:47:43.415092 3653627 kubeadm.go:322]     --control-plane 
	I0731 10:47:43.415096 3653627 kubeadm.go:322] 
	I0731 10:47:43.415175 3653627 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:47:43.415179 3653627 kubeadm.go:322] 
	I0731 10:47:43.415256 3653627 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e4isyn.hv6ps6wcz73n8to6 \
	I0731 10:47:43.415353 3653627 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:86a59b46a66ac234bd53b6c72750e3c62130510b828ccfbf571d11f4fbb3f8f1 
	I0731 10:47:43.418799 3653627 kubeadm.go:322] W0731 10:47:24.188090    1106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0731 10:47:43.419010 3653627 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 10:47:43.419111 3653627 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 10:47:43.419231 3653627 kubeadm.go:322] W0731 10:47:29.805612    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 10:47:43.419347 3653627 kubeadm.go:322] W0731 10:47:29.808322    1106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 10:47:43.419360 3653627 cni.go:84] Creating CNI manager for ""
	I0731 10:47:43.419367 3653627 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:47:43.423204 3653627 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 10:47:43.425341 3653627 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 10:47:43.430112 3653627 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0731 10:47:43.430132 3653627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 10:47:43.453470 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 10:47:43.904620 3653627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:47:43.904738 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:43.904803 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=a7848ba25aaaad8ebb50e721c0d343e471188fc7 minikube.k8s.io/name=ingress-addon-legacy-947999 minikube.k8s.io/updated_at=2023_07_31T10_47_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:44.058793 3653627 ops.go:34] apiserver oom_adj: -16
	I0731 10:47:44.058879 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:44.152909 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:44.754305 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:45.254174 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:45.754331 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:46.254432 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:46.754360 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:47.254729 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:47.754577 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:48.254416 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:48.753960 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:49.254716 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:49.753919 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:50.253927 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:50.754288 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:51.254209 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:51.754435 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:52.254300 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:52.754027 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:53.254461 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:53.754433 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:54.254010 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:54.753891 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:55.253866 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:55.753955 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:56.254801 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:56.753916 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:57.254600 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:57.754468 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:58.254781 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:58.753904 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:59.253968 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:47:59.754738 3653627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:48:00.083773 3653627 kubeadm.go:1081] duration metric: took 16.179072972s to wait for elevateKubeSystemPrivileges.
	I0731 10:48:00.083813 3653627 kubeadm.go:406] StartCluster complete in 36.03236602s
	I0731 10:48:00.083839 3653627 settings.go:142] acquiring lock: {Name:mk7385413106a9bc6c5ba9de86edde2c8dc9b1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:48:00.083926 3653627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:48:00.084821 3653627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/kubeconfig: {Name:mkbf88964f408983a815b4e4688fb8f882a1e0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:48:00.085298 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 10:48:00.085237 3653627 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 10:48:00.085416 3653627 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-947999"
	I0731 10:48:00.085433 3653627 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-947999"
	I0731 10:48:00.085480 3653627 host.go:66] Checking if "ingress-addon-legacy-947999" exists ...
	I0731 10:48:00.085573 3653627 config.go:182] Loaded profile config "ingress-addon-legacy-947999": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0731 10:48:00.085637 3653627 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-947999"
	I0731 10:48:00.085654 3653627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-947999"
	I0731 10:48:00.085957 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:48:00.086056 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:48:00.088615 3653627 kapi.go:59] client config for ingress-addon-legacy-947999: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt", KeyFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key", CAFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:48:00.090349 3653627 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 10:48:00.157995 3653627 kapi.go:59] client config for ingress-addon-legacy-947999: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt", KeyFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key", CAFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:48:00.167179 3653627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:48:00.169454 3653627 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:48:00.169480 3653627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:48:00.169562 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:48:00.188666 3653627 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-947999" context rescaled to 1 replicas
	I0731 10:48:00.188711 3653627 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0731 10:48:00.191505 3653627 out.go:177] * Verifying Kubernetes components...
	I0731 10:48:00.194281 3653627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:48:00.192608 3653627 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-947999"
	I0731 10:48:00.194524 3653627 host.go:66] Checking if "ingress-addon-legacy-947999" exists ...
	I0731 10:48:00.195801 3653627 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-947999 --format={{.State.Status}}
	I0731 10:48:00.204187 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:48:00.229507 3653627 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:48:00.229532 3653627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:48:00.229600 3653627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-947999
	I0731 10:48:00.256022 3653627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35358 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/ingress-addon-legacy-947999/id_rsa Username:docker}
	I0731 10:48:00.527079 3653627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:48:00.539854 3653627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 10:48:00.540574 3653627 kapi.go:59] client config for ingress-addon-legacy-947999: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt", KeyFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.key", CAFile:"/home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:48:00.540938 3653627 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-947999" to be "Ready" ...
	I0731 10:48:00.544584 3653627 node_ready.go:49] node "ingress-addon-legacy-947999" has status "Ready":"True"
	I0731 10:48:00.544645 3653627 node_ready.go:38] duration metric: took 3.66613ms waiting for node "ingress-addon-legacy-947999" to be "Ready" ...
	I0731 10:48:00.544670 3653627 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:48:00.554035 3653627 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:00.625384 3653627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:48:01.216901 3653627 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 10:48:01.218822 3653627 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 10:48:01.220645 3653627 addons.go:502] enable addons completed in 1.135406702s: enabled=[default-storageclass storage-provisioner]
	I0731 10:48:02.565095 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:04.565680 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:06.566318 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:09.065423 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:11.066497 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:13.567367 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:16.065508 3653627 pod_ready.go:102] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"False"
	I0731 10:48:17.066117 3653627 pod_ready.go:92] pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.066137 3653627 pod_ready.go:81] duration metric: took 16.512038916s waiting for pod "coredns-66bff467f8-fm5j2" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.066148 3653627 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.072252 3653627 pod_ready.go:92] pod "etcd-ingress-addon-legacy-947999" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.072272 3653627 pod_ready.go:81] duration metric: took 6.116753ms waiting for pod "etcd-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.072284 3653627 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.078308 3653627 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-947999" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.078326 3653627 pod_ready.go:81] duration metric: took 6.035334ms waiting for pod "kube-apiserver-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.078337 3653627 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.084448 3653627 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-947999" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.084518 3653627 pod_ready.go:81] duration metric: took 6.168922ms waiting for pod "kube-controller-manager-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.084543 3653627 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vj278" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.090377 3653627 pod_ready.go:92] pod "kube-proxy-vj278" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.090401 3653627 pod_ready.go:81] duration metric: took 5.837878ms waiting for pod "kube-proxy-vj278" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.090410 3653627 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.260631 3653627 request.go:628] Waited for 170.164252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-947999
	I0731 10:48:17.460636 3653627 request.go:628] Waited for 197.267344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-947999
	I0731 10:48:17.463240 3653627 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-947999" in "kube-system" namespace has status "Ready":"True"
	I0731 10:48:17.463263 3653627 pod_ready.go:81] duration metric: took 372.845442ms waiting for pod "kube-scheduler-ingress-addon-legacy-947999" in "kube-system" namespace to be "Ready" ...
	I0731 10:48:17.463272 3653627 pod_ready.go:38] duration metric: took 16.918558158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:48:17.463304 3653627 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:48:17.463377 3653627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:48:17.476221 3653627 api_server.go:72] duration metric: took 17.287476654s to wait for apiserver process to appear ...
	I0731 10:48:17.476292 3653627 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:48:17.476321 3653627 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 10:48:17.485264 3653627 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 10:48:17.486072 3653627 api_server.go:141] control plane version: v1.18.20
	I0731 10:48:17.486096 3653627 api_server.go:131] duration metric: took 9.789825ms to wait for apiserver health ...
	I0731 10:48:17.486106 3653627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 10:48:17.661496 3653627 request.go:628] Waited for 175.305245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 10:48:17.667516 3653627 system_pods.go:59] 8 kube-system pods found
	I0731 10:48:17.667550 3653627 system_pods.go:61] "coredns-66bff467f8-fm5j2" [1521849c-b60c-470b-888c-8fab872679a1] Running
	I0731 10:48:17.667558 3653627 system_pods.go:61] "etcd-ingress-addon-legacy-947999" [5e155c53-66c4-4817-a249-dee256ede12a] Running
	I0731 10:48:17.667563 3653627 system_pods.go:61] "kindnet-pj9fj" [60af9747-45b3-42b9-8ff4-adfc5168b204] Running
	I0731 10:48:17.667593 3653627 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-947999" [fe684ef2-cc25-47dd-82f6-7ebf856a9741] Running
	I0731 10:48:17.667599 3653627 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-947999" [c2dddad7-1bde-4f4d-ac1e-1977c10ee278] Running
	I0731 10:48:17.667607 3653627 system_pods.go:61] "kube-proxy-vj278" [8ad54a5e-eac1-46aa-82db-b2e1c833f8fd] Running
	I0731 10:48:17.667612 3653627 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-947999" [d195b1f9-fdf6-445d-a51a-cbcbf1321dd2] Running
	I0731 10:48:17.667620 3653627 system_pods.go:61] "storage-provisioner" [9c37a23e-a31b-487f-b8f1-c82ada3d8f47] Running
	I0731 10:48:17.667626 3653627 system_pods.go:74] duration metric: took 181.515013ms to wait for pod list to return data ...
	I0731 10:48:17.667634 3653627 default_sa.go:34] waiting for default service account to be created ...
	I0731 10:48:17.861032 3653627 request.go:628] Waited for 193.319441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 10:48:17.863570 3653627 default_sa.go:45] found service account: "default"
	I0731 10:48:17.863594 3653627 default_sa.go:55] duration metric: took 195.949232ms for default service account to be created ...
	I0731 10:48:17.863603 3653627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 10:48:18.061047 3653627 request.go:628] Waited for 197.345301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 10:48:18.067449 3653627 system_pods.go:86] 8 kube-system pods found
	I0731 10:48:18.067481 3653627 system_pods.go:89] "coredns-66bff467f8-fm5j2" [1521849c-b60c-470b-888c-8fab872679a1] Running
	I0731 10:48:18.067488 3653627 system_pods.go:89] "etcd-ingress-addon-legacy-947999" [5e155c53-66c4-4817-a249-dee256ede12a] Running
	I0731 10:48:18.067494 3653627 system_pods.go:89] "kindnet-pj9fj" [60af9747-45b3-42b9-8ff4-adfc5168b204] Running
	I0731 10:48:18.067521 3653627 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-947999" [fe684ef2-cc25-47dd-82f6-7ebf856a9741] Running
	I0731 10:48:18.067534 3653627 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-947999" [c2dddad7-1bde-4f4d-ac1e-1977c10ee278] Running
	I0731 10:48:18.067539 3653627 system_pods.go:89] "kube-proxy-vj278" [8ad54a5e-eac1-46aa-82db-b2e1c833f8fd] Running
	I0731 10:48:18.067552 3653627 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-947999" [d195b1f9-fdf6-445d-a51a-cbcbf1321dd2] Running
	I0731 10:48:18.067558 3653627 system_pods.go:89] "storage-provisioner" [9c37a23e-a31b-487f-b8f1-c82ada3d8f47] Running
	I0731 10:48:18.067565 3653627 system_pods.go:126] duration metric: took 203.956675ms to wait for k8s-apps to be running ...
	I0731 10:48:18.067576 3653627 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 10:48:18.067656 3653627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:48:18.082049 3653627 system_svc.go:56] duration metric: took 14.462191ms WaitForService to wait for kubelet.
	I0731 10:48:18.082087 3653627 kubeadm.go:581] duration metric: took 17.893338217s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 10:48:18.082134 3653627 node_conditions.go:102] verifying NodePressure condition ...
	I0731 10:48:18.261496 3653627 request.go:628] Waited for 179.292122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0731 10:48:18.264434 3653627 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 10:48:18.264464 3653627 node_conditions.go:123] node cpu capacity is 2
	I0731 10:48:18.264475 3653627 node_conditions.go:105] duration metric: took 182.335147ms to run NodePressure ...
	I0731 10:48:18.264485 3653627 start.go:228] waiting for startup goroutines ...
	I0731 10:48:18.264519 3653627 start.go:233] waiting for cluster config update ...
	I0731 10:48:18.264535 3653627 start.go:242] writing updated cluster config ...
	I0731 10:48:18.264855 3653627 ssh_runner.go:195] Run: rm -f paused
	I0731 10:48:18.323645 3653627 start.go:596] kubectl: 1.27.4, cluster: 1.18.20 (minor skew: 9)
	I0731 10:48:18.325386 3653627 out.go:177] 
	W0731 10:48:18.327307 3653627 out.go:239] ! /usr/local/bin/kubectl is version 1.27.4, which may have incompatibilities with Kubernetes 1.18.20.
	I0731 10:48:18.328951 3653627 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0731 10:48:18.330590 3653627 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-947999" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7cf6da3d05fbd       13753a81eccfd       13 seconds ago       Exited              hello-world-app           2                   65499dc40c56b       hello-world-app-5f5d8b66bb-8tz2j
	fcebc04b2dc8b       66bf2c914bf4d       40 seconds ago       Running             nginx                     0                   d1f2f9f1cef23       nginx
	edd390d6f5736       d7f0cba3aa5bf       56 seconds ago       Exited              controller                0                   c7772112bf778       ingress-nginx-controller-7fcf777cb7-rtkql
	1e11475e5556e       a883f7fc35610       About a minute ago   Exited              patch                     0                   85e0540c2941c       ingress-nginx-admission-patch-kcgkt
	3ddf0386a0702       a883f7fc35610       About a minute ago   Exited              create                    0                   a4668ca456cc8       ingress-nginx-admission-create-7br94
	0390bc2e3ce56       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   6bdd8169d7e86       coredns-66bff467f8-fm5j2
	124b5774411c5       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   397db1633bc5c       storage-provisioner
	6a016c27f6516       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   d205f2f4d3614       kindnet-pj9fj
	fdd0b77262ba2       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   4e2ba714f4b87       kube-proxy-vj278
	eebb87c21e571       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   ae0e90d04d860       kube-apiserver-ingress-addon-legacy-947999
	b85cd726c526a       095f37015706d       About a minute ago   Running             kube-scheduler            0                   27c899f82ae23       kube-scheduler-ingress-addon-legacy-947999
	b8ade27f88423       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   c6257a984fabc       kube-controller-manager-ingress-addon-legacy-947999
	c8f6d78785228       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   e5f8183dd0d5f       etcd-ingress-addon-legacy-947999
	
	* 
	* ==> containerd <==
	* Jul 31 10:49:10 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:10.031859574Z" level=info msg="RemoveContainer for \"e795a9dbda8358d50d630e24844ae97bb016d62ccdd476907f98cf33fe4f07ee\" returns successfully"
	Jul 31 10:49:15 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:15.639514352Z" level=info msg="StopContainer for \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" with timeout 2 (s)"
	Jul 31 10:49:15 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:15.639893305Z" level=info msg="Stop container \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" with signal terminated"
	Jul 31 10:49:15 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:15.657031432Z" level=info msg="StopContainer for \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" with timeout 2 (s)"
	Jul 31 10:49:15 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:15.657485889Z" level=info msg="Skipping the sending of signal terminated to container \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" because a prior stop with timeout>0 request already sent the signal"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.650874841Z" level=info msg="Kill container \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.658588500Z" level=info msg="Kill container \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.716278601Z" level=info msg="shim disconnected" id=edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.716341099Z" level=warning msg="cleaning up after shim disconnected" id=edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302 namespace=k8s.io
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.716354170Z" level=info msg="cleaning up dead shim"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.727368741Z" level=warning msg="cleanup warnings time=\"2023-07-31T10:49:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4610 runtime=io.containerd.runc.v2\n"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.729932431Z" level=info msg="StopContainer for \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" returns successfully"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.729934097Z" level=info msg="StopContainer for \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" returns successfully"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.730694465Z" level=info msg="StopPodSandbox for \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.730768812Z" level=info msg="Container to stop \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.730704361Z" level=info msg="StopPodSandbox for \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.730956201Z" level=info msg="Container to stop \"edd390d6f5736bcff8745e7646c2b1a54fb47f98ef76369ed37ce47d664ee302\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.768750277Z" level=info msg="shim disconnected" id=c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.768813563Z" level=warning msg="cleaning up after shim disconnected" id=c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7 namespace=k8s.io
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.768826872Z" level=info msg="cleaning up dead shim"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.784566879Z" level=warning msg="cleanup warnings time=\"2023-07-31T10:49:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4648 runtime=io.containerd.runc.v2\n"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.837398079Z" level=info msg="TearDown network for sandbox \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\" successfully"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.837445210Z" level=info msg="StopPodSandbox for \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\" returns successfully"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.857093497Z" level=info msg="TearDown network for sandbox \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\" successfully"
	Jul 31 10:49:17 ingress-addon-legacy-947999 containerd[825]: time="2023-07-31T10:49:17.857173324Z" level=info msg="StopPodSandbox for \"c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7\" returns successfully"
	
	* 
	* ==> coredns [0390bc2e3ce56776eb83f1878e7ce59be8788bcb34c8cf2243b3e19f2f7d2bc2] <==
	* [INFO] 10.244.0.5:54340 - 39996 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003238725s
	[INFO] 10.244.0.5:51451 - 22091 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031811s
	[INFO] 10.244.0.5:51451 - 51233 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038777s
	[INFO] 10.244.0.5:54340 - 21475 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099972s
	[INFO] 10.244.0.5:51451 - 10467 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00134989s
	[INFO] 10.244.0.5:51451 - 63632 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000955699s
	[INFO] 10.244.0.5:51451 - 52880 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063154s
	[INFO] 10.244.0.5:34535 - 43648 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068619s
	[INFO] 10.244.0.5:34535 - 29495 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036513s
	[INFO] 10.244.0.5:34535 - 3255 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036661s
	[INFO] 10.244.0.5:34535 - 21747 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032049s
	[INFO] 10.244.0.5:34535 - 13752 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031335s
	[INFO] 10.244.0.5:34535 - 6499 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030983s
	[INFO] 10.244.0.5:52080 - 15775 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001037086s
	[INFO] 10.244.0.5:52080 - 8789 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035823s
	[INFO] 10.244.0.5:52080 - 3272 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000195848s
	[INFO] 10.244.0.5:34535 - 26892 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006665791s
	[INFO] 10.244.0.5:52080 - 51790 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000175508s
	[INFO] 10.244.0.5:52080 - 19038 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148628s
	[INFO] 10.244.0.5:34535 - 2681 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00089901s
	[INFO] 10.244.0.5:52080 - 4415 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003538s
	[INFO] 10.244.0.5:34535 - 18226 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000024919s
	[INFO] 10.244.0.5:52080 - 6704 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001013168s
	[INFO] 10.244.0.5:52080 - 62720 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000849706s
	[INFO] 10.244.0.5:52080 - 43793 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036176s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-947999
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-947999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a7848ba25aaaad8ebb50e721c0d343e471188fc7
	                    minikube.k8s.io/name=ingress-addon-legacy-947999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T10_47_43_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 10:47:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-947999
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 10:49:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 10:49:16 +0000   Mon, 31 Jul 2023 10:47:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 10:49:16 +0000   Mon, 31 Jul 2023 10:47:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 10:49:16 +0000   Mon, 31 Jul 2023 10:47:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 10:49:16 +0000   Mon, 31 Jul 2023 10:47:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-947999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee7d6f814c7d48a0b68cfbd82cd3b240
	  System UUID:                1a9ee5df-aaa1-4345-884c-6f32bbd56671
	  Boot ID:                    db857c45-c57f-400d-ae31-7370edb43af7
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-8tz2j                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-fm5j2                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 etcd-ingress-addon-legacy-947999                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kindnet-pj9fj                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-ingress-addon-legacy-947999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-947999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-vj278                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-ingress-addon-legacy-947999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeAllocatableEnforced  111s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s (x4 over 111s)  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x4 over 111s)  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x3 over 111s)  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet     Node ingress-addon-legacy-947999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                87s                  kubelet     Node ingress-addon-legacy-947999 status is now: NodeReady
	  Normal  Starting                 83s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001132] FS-Cache: O-key=[8] '14475c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000955] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=00000000ec69e0f1
	[  +0.001086] FS-Cache: N-key=[8] '14475c0100000000'
	[  +0.002815] FS-Cache: Duplicate cookie detected
	[  +0.000698] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001025] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=0000000023b8fd7c
	[  +0.001072] FS-Cache: O-key=[8] '14475c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=0000000037c70260
	[  +0.001110] FS-Cache: N-key=[8] '14475c0100000000'
	[  +1.689708] FS-Cache: Duplicate cookie detected
	[  +0.000773] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=00000000011fd128
	[  +0.001150] FS-Cache: O-key=[8] '13475c0100000000'
	[  +0.000732] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=0000000065a709cb
	[  +0.001054] FS-Cache: N-key=[8] '13475c0100000000'
	[  +0.343766] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=00000000f52480e8
	[  +0.001080] FS-Cache: O-key=[8] '19475c0100000000'
	[  +0.000738] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=000000007e1f00b7
	[  +0.001196] FS-Cache: N-key=[8] '19475c0100000000'
	
	* 
	* ==> etcd [c8f6d787852281f3c9fddd307131df97cf46a0e816a7d43eaf00e9c871ac475d] <==
	* raft2023/07/31 10:47:33 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/31 10:47:33 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/31 10:47:33 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-31 10:47:33.914036 W | auth: simple token is not cryptographically signed
	2023-07-31 10:47:33.934941 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-31 10:47:33.936236 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-31 10:47:33.949206 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-31 10:47:33.949373 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-31 10:47:33.949514 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/31 10:47:33 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-31 10:47:33.949917 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/07/31 10:47:34 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/31 10:47:34 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/31 10:47:34 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/31 10:47:34 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/31 10:47:34 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-31 10:47:34.049037 I | etcdserver: published {Name:ingress-addon-legacy-947999 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-31 10:47:34.067392 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-31 10:47:34.069447 I | embed: ready to serve client requests
	2023-07-31 10:47:34.071460 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-31 10:47:34.071518 I | embed: ready to serve client requests
	2023-07-31 10:47:34.077514 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-31 10:47:34.077797 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-31 10:47:34.265273 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-31 10:47:35.868331 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (112.911961ms) to execute
	
	* 
	* ==> kernel <==
	*  10:49:23 up 18:31,  0 users,  load average: 0.96, 1.45, 2.15
	Linux ingress-addon-legacy-947999 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [6a016c27f65160c33fd3dbcd219dc5477ab17ae40dff7763ee102fa3959c3d67] <==
	* I0731 10:48:02.237811       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 10:48:02.237870       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0731 10:48:02.238059       1 main.go:116] setting mtu 1500 for CNI 
	I0731 10:48:02.238076       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 10:48:02.238089       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 10:48:02.636234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:02.636264       1 main.go:227] handling current node
	I0731 10:48:12.736682       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:12.736709       1 main.go:227] handling current node
	I0731 10:48:22.754540       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:22.754573       1 main.go:227] handling current node
	I0731 10:48:32.758039       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:32.758064       1 main.go:227] handling current node
	I0731 10:48:42.768181       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:42.768206       1 main.go:227] handling current node
	I0731 10:48:52.780478       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:48:52.780504       1 main.go:227] handling current node
	I0731 10:49:02.783891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:49:02.783923       1 main.go:227] handling current node
	I0731 10:49:12.786971       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:49:12.787003       1 main.go:227] handling current node
	I0731 10:49:22.792555       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:49:22.792583       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [eebb87c21e571af328fdc18df6e185f9156dbf5eefad565f040fb1db57bb5e69] <==
	* I0731 10:47:40.358284       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0731 10:47:40.473286       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0731 10:47:40.550006       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0731 10:47:40.550044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 10:47:40.550069       1 cache.go:39] Caches are synced for autoregister controller
	I0731 10:47:40.560352       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0731 10:47:40.652467       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 10:47:41.343064       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0731 10:47:41.343101       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 10:47:41.350533       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0731 10:47:41.357614       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0731 10:47:41.357636       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0731 10:47:41.730182       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 10:47:41.775187       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0731 10:47:41.889469       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0731 10:47:41.890655       1 controller.go:609] quota admission added evaluator for: endpoints
	I0731 10:47:41.894086       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 10:47:42.809932       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0731 10:47:43.266475       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0731 10:47:43.399057       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0731 10:47:46.669578       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 10:47:59.737958       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0731 10:47:59.815229       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0731 10:48:19.150570       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0731 10:48:40.564098       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [b8ade27f8842379d5bfdf57b87e5f2c513a29571187dc22dab75a6a1764d0a0f] <==
	* I0731 10:48:00.055152       1 disruption.go:339] Sending events to api server.
	I0731 10:48:00.056203       1 shared_informer.go:230] Caches are synced for stateful set 
	I0731 10:48:00.117739       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0731 10:48:00.191145       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"865d07a2-e74e-40b2-a2d5-a66117cfb7c2", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0731 10:48:00.217684       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 10:48:00.261217       1 shared_informer.go:230] Caches are synced for taint 
	I0731 10:48:00.261347       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0731 10:48:00.261392       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-947999. Assuming now as a timestamp.
	I0731 10:48:00.261432       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0731 10:48:00.261738       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0731 10:48:00.262114       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-947999", UID:"b023c26b-f766-4054-b645-a1c7ca0efb92", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-947999 event: Registered Node ingress-addon-legacy-947999 in Controller
	I0731 10:48:00.264400       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 10:48:00.287987       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"91afb8d8-eaf4-494e-84dc-eddfa1871d07", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-fqqb5
	I0731 10:48:00.312286       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 10:48:00.312313       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 10:48:00.358911       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 10:48:19.135403       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f8eb42f1-1429-49fd-becf-79a0be5045e3", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0731 10:48:19.165575       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"5bdb6474-1c08-47ff-ab9f-0fbdc5333b5c", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-rtkql
	I0731 10:48:19.199889       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"df104a21-2f63-4351-866a-a25640a3b940", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7br94
	I0731 10:48:19.248118       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f332e8f2-9afe-4fc5-8b05-fb33f71fbfcb", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-kcgkt
	I0731 10:48:21.917896       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"df104a21-2f63-4351-866a-a25640a3b940", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 10:48:21.944440       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f332e8f2-9afe-4fc5-8b05-fb33f71fbfcb", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 10:48:51.286393       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"93b89444-a320-4c38-989d-ebd149c16ec6", APIVersion:"apps/v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-8tz2j
	I0731 10:48:51.286427       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"47a722a4-b542-478a-a74c-c595fbff3bb0", APIVersion:"apps/v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	E0731 10:49:20.308918       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-m6fxz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [fdd0b77262ba294fefa326f87a9799f7ec45da7d34dcb43291b0c4b678f52c24] <==
	* W0731 10:48:00.698625       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0731 10:48:00.715242       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0731 10:48:00.715292       1 server_others.go:186] Using iptables Proxier.
	I0731 10:48:00.715589       1 server.go:583] Version: v1.18.20
	I0731 10:48:00.717716       1 config.go:315] Starting service config controller
	I0731 10:48:00.717752       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0731 10:48:00.717816       1 config.go:133] Starting endpoints config controller
	I0731 10:48:00.717820       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0731 10:48:00.825708       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0731 10:48:00.825801       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b85cd726c526a9acfb17b1c208cd67dc928fcd602226cbfe32ff73a694e708f2] <==
	* W0731 10:47:40.535897       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 10:47:40.564858       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 10:47:40.564944       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 10:47:40.567905       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0731 10:47:40.569630       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 10:47:40.569663       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 10:47:40.570202       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0731 10:47:40.579689       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 10:47:40.583655       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 10:47:40.584102       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 10:47:40.584385       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 10:47:40.584600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 10:47:40.584944       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 10:47:40.585746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 10:47:40.586010       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 10:47:40.586284       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 10:47:40.586518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 10:47:40.588637       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 10:47:40.589074       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 10:47:41.446779       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 10:47:41.500053       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 10:47:41.578926       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 10:47:41.585145       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0731 10:47:43.770140       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0731 10:47:59.988021       1 factory.go:503] pod: kube-system/coredns-66bff467f8-fqqb5 is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Jul 31 10:48:54 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:48:54.983617    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9d513cc1f9943185feb9d431b087842c36224cf3efe10b210f1b2852b0f9e924
	Jul 31 10:48:54 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:48:54.984122    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e795a9dbda8358d50d630e24844ae97bb016d62ccdd476907f98cf33fe4f07ee
	Jul 31 10:48:54 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:48:54.984519    1635 pod_workers.go:191] Error syncing pod c0e8ac63-705a-4521-ace7-3ddf81df2893 ("hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"
	Jul 31 10:48:55 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:48:55.986949    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e795a9dbda8358d50d630e24844ae97bb016d62ccdd476907f98cf33fe4f07ee
	Jul 31 10:48:55 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:48:55.987628    1635 pod_workers.go:191] Error syncing pod c0e8ac63-705a-4521-ace7-3ddf81df2893 ("hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"
	Jul 31 10:49:00 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:00.741913    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a40401f13ac388046ba64c9535266e9c72f8361009808bec1dee01c137c048e6
	Jul 31 10:49:00 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:49:00.742681    1635 pod_workers.go:191] Error syncing pod 209d5b49-ae91-433d-89d1-6f3e79990c86 ("kube-ingress-dns-minikube_kube-system(209d5b49-ae91-433d-89d1-6f3e79990c86)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(209d5b49-ae91-433d-89d1-6f3e79990c86)"
	Jul 31 10:49:07 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:07.220827    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-x2dcf" (UniqueName: "kubernetes.io/secret/209d5b49-ae91-433d-89d1-6f3e79990c86-minikube-ingress-dns-token-x2dcf") pod "209d5b49-ae91-433d-89d1-6f3e79990c86" (UID: "209d5b49-ae91-433d-89d1-6f3e79990c86")
	Jul 31 10:49:07 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:07.224918    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209d5b49-ae91-433d-89d1-6f3e79990c86-minikube-ingress-dns-token-x2dcf" (OuterVolumeSpecName: "minikube-ingress-dns-token-x2dcf") pod "209d5b49-ae91-433d-89d1-6f3e79990c86" (UID: "209d5b49-ae91-433d-89d1-6f3e79990c86"). InnerVolumeSpecName "minikube-ingress-dns-token-x2dcf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 10:49:07 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:07.321186    1635 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-x2dcf" (UniqueName: "kubernetes.io/secret/209d5b49-ae91-433d-89d1-6f3e79990c86-minikube-ingress-dns-token-x2dcf") on node "ingress-addon-legacy-947999" DevicePath ""
	Jul 31 10:49:09 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:09.012806    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a40401f13ac388046ba64c9535266e9c72f8361009808bec1dee01c137c048e6
	Jul 31 10:49:09 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:09.741756    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e795a9dbda8358d50d630e24844ae97bb016d62ccdd476907f98cf33fe4f07ee
	Jul 31 10:49:10 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:10.018654    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e795a9dbda8358d50d630e24844ae97bb016d62ccdd476907f98cf33fe4f07ee
	Jul 31 10:49:10 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:10.018997    1635 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7cf6da3d05fbd1efe53d4a6e19e0e9f42f2f204f84a734e3c20283bb37ae77ee
	Jul 31 10:49:10 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:49:10.019253    1635 pod_workers.go:191] Error syncing pod c0e8ac63-705a-4521-ace7-3ddf81df2893 ("hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-8tz2j_default(c0e8ac63-705a-4521-ace7-3ddf81df2893)"
	Jul 31 10:49:15 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:49:15.645741    1635 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rtkql.1776ee25a8e36f44", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rtkql", UID:"ce4ec0c5-e52f-4447-981e-3c3bef96ee18", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-947999"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a024ae6148144, ext:92490805399, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a024ae6148144, ext:92490805399, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rtkql.1776ee25a8e36f44" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 10:49:15 ingress-addon-legacy-947999 kubelet[1635]: E0731 10:49:15.663173    1635 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rtkql.1776ee25a8e36f44", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rtkql", UID:"ce4ec0c5-e52f-4447-981e-3c3bef96ee18", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-947999"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a024ae6148144, ext:92490805399, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a024ae71f8a0a, ext:92508305757, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rtkql.1776ee25a8e36f44" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.849860    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-webhook-cert") pod "ce4ec0c5-e52f-4447-981e-3c3bef96ee18" (UID: "ce4ec0c5-e52f-4447-981e-3c3bef96ee18")
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.849937    1635 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-tkp4s" (UniqueName: "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-ingress-nginx-token-tkp4s") pod "ce4ec0c5-e52f-4447-981e-3c3bef96ee18" (UID: "ce4ec0c5-e52f-4447-981e-3c3bef96ee18")
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.855087    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ce4ec0c5-e52f-4447-981e-3c3bef96ee18" (UID: "ce4ec0c5-e52f-4447-981e-3c3bef96ee18"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.856935    1635 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-ingress-nginx-token-tkp4s" (OuterVolumeSpecName: "ingress-nginx-token-tkp4s") pod "ce4ec0c5-e52f-4447-981e-3c3bef96ee18" (UID: "ce4ec0c5-e52f-4447-981e-3c3bef96ee18"). InnerVolumeSpecName "ingress-nginx-token-tkp4s". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.950250    1635 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-webhook-cert") on node "ingress-addon-legacy-947999" DevicePath ""
	Jul 31 10:49:17 ingress-addon-legacy-947999 kubelet[1635]: I0731 10:49:17.950295    1635 reconciler.go:319] Volume detached for volume "ingress-nginx-token-tkp4s" (UniqueName: "kubernetes.io/secret/ce4ec0c5-e52f-4447-981e-3c3bef96ee18-ingress-nginx-token-tkp4s") on node "ingress-addon-legacy-947999" DevicePath ""
	Jul 31 10:49:18 ingress-addon-legacy-947999 kubelet[1635]: W0731 10:49:18.040748    1635 pod_container_deletor.go:77] Container "c7772112bf77881b4683935486e2a709856f66dfe2ab6de2b68456b6ad82d5f7" not found in pod's containers
	Jul 31 10:49:18 ingress-addon-legacy-947999 kubelet[1635]: W0731 10:49:18.748287    1635 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ce4ec0c5-e52f-4447-981e-3c3bef96ee18/volumes" does not exist
	
	* 
	* ==> storage-provisioner [124b5774411c58be7040dd18500ec70607a98e5854e15165fe8fb79c4294436b] <==
	* I0731 10:48:03.518209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 10:48:03.529945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 10:48:03.530559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 10:48:03.537256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 10:48:03.537721       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b1a718c-46f9-4857-8fc1-ff3a0c42067c", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-947999_ab5243fc-cd1b-4651-9fb6-d871867832e8 became leader
	I0731 10:48:03.537948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-947999_ab5243fc-cd1b-4651-9fb6-d871867832e8!
	I0731 10:48:03.638608       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-947999_ab5243fc-cd1b-4651-9fb6-d871867832e8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-947999 -n ingress-addon-legacy-947999
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-947999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.62s)

TestMissingContainerUpgrade (219.31s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade


=== CONT  TestMissingContainerUpgrade
E0731 11:09:51.996144 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.22.0.1878123369.exe start -p missing-upgrade-953629 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.22.0.1878123369.exe start -p missing-upgrade-953629 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.066162316s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-953629
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-953629: (10.507716845s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-953629
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-953629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-953629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 90 (1m46.900622643s)

-- stdout --
	* [missing-upgrade-953629] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-953629 in cluster missing-upgrade-953629
	* Pulling base image ...
	* Downloading Kubernetes v1.21.2 preload ...
	* docker "missing-upgrade-953629" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 11:11:39.523113 3743291 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:11:39.523285 3743291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:11:39.523290 3743291 out.go:309] Setting ErrFile to fd 2...
	I0731 11:11:39.523295 3743291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:11:39.523550 3743291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 11:11:39.523957 3743291 out.go:303] Setting JSON to false
	I0731 11:11:39.524974 3743291 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":68047,"bootTime":1690733853,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 11:11:39.525025 3743291 start.go:138] virtualization:  
	I0731 11:11:39.530596 3743291 out.go:177] * [missing-upgrade-953629] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:11:39.538710 3743291 notify.go:220] Checking for updates...
	I0731 11:11:39.542416 3743291 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 11:11:39.544341 3743291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:11:39.546451 3743291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 11:11:39.548911 3743291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 11:11:39.551220 3743291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:11:39.553229 3743291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:11:39.555644 3743291 config.go:182] Loaded profile config "missing-upgrade-953629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0731 11:11:39.558367 3743291 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 11:11:39.561275 3743291 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:11:39.592329 3743291 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:11:39.592425 3743291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:11:39.723053 3743291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-31 11:11:39.712991645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:11:39.723151 3743291 docker.go:294] overlay module found
	I0731 11:11:39.726573 3743291 out.go:177] * Using the docker driver based on existing profile
	I0731 11:11:39.728472 3743291 start.go:298] selected driver: docker
	I0731 11:11:39.728486 3743291 start.go:898] validating driver "docker" against &{Name:missing-upgrade-953629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-953629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:11:39.728597 3743291 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:11:39.729228 3743291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:11:39.844557 3743291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-31 11:11:39.834813584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:11:39.844864 3743291 cni.go:84] Creating CNI manager for ""
	I0731 11:11:39.844880 3743291 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 11:11:39.844892 3743291 start_flags.go:319] config:
	{Name:missing-upgrade-953629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-953629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:11:39.846937 3743291 out.go:177] * Starting control plane node missing-upgrade-953629 in cluster missing-upgrade-953629
	I0731 11:11:39.848643 3743291 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0731 11:11:39.850222 3743291 out.go:177] * Pulling base image ...
	I0731 11:11:39.852081 3743291 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:11:39.852241 3743291 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0731 11:11:39.871166 3743291 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0731 11:11:39.871193 3743291 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0731 11:11:39.920831 3743291 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0731 11:11:39.920857 3743291 cache.go:57] Caching tarball of preloaded images
	I0731 11:11:39.921006 3743291 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:11:39.923457 3743291 out.go:177] * Downloading Kubernetes v1.21.2 preload ...
	I0731 11:11:39.925295 3743291 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:11:40.061667 3743291 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f1e1f7bdb5d08690c839f70306158850 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0731 11:11:47.890342 3743291 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:11:47.890439 3743291 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:11:49.177931 3743291 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0731 11:11:49.178136 3743291 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/missing-upgrade-953629/config.json ...
	I0731 11:11:49.179175 3743291 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:11:49.179276 3743291 start.go:365] acquiring machines lock for missing-upgrade-953629: {Name:mke84f7264f3c4a5d560a214bba5df2e7c818b65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:11:49.179388 3743291 start.go:369] acquired machines lock for "missing-upgrade-953629" in 52.8µs
	I0731 11:11:49.179435 3743291 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:11:49.179461 3743291 fix.go:54] fixHost starting: 
	I0731 11:11:49.179761 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:49.198580 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:49.198635 3743291 fix.go:102] recreateIfNeeded on missing-upgrade-953629: state= err=unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:49.198652 3743291 fix.go:107] machineExists: false. err=machine does not exist
	I0731 11:11:49.234212 3743291 out.go:177] * docker "missing-upgrade-953629" container is missing, will recreate.
	I0731 11:11:49.262378 3743291 delete.go:124] DEMOLISHING missing-upgrade-953629 ...
	I0731 11:11:49.262482 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:49.283271 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	W0731 11:11:49.283336 3743291 stop.go:75] unable to get state: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:49.283354 3743291 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:49.283801 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:49.312645 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:49.312708 3743291 delete.go:82] Unable to get host status for missing-upgrade-953629, assuming it has already been deleted: state: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:49.312771 3743291 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-953629
	W0731 11:11:49.341498 3743291 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-953629 returned with exit code 1
	I0731 11:11:49.341537 3743291 kic.go:367] could not find the container missing-upgrade-953629 to remove it. will try anyways
	I0731 11:11:49.341594 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:49.370561 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	W0731 11:11:49.370640 3743291 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:49.370709 3743291 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-953629 /bin/bash -c "sudo init 0"
	W0731 11:11:49.402739 3743291 cli_runner.go:211] docker exec --privileged -t missing-upgrade-953629 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 11:11:49.402775 3743291 oci.go:647] error shutdown missing-upgrade-953629: docker exec --privileged -t missing-upgrade-953629 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:50.402967 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:50.427000 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:50.427062 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:50.427076 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:50.427103 3743291 retry.go:31] will retry after 432.832716ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:50.860755 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:50.879780 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:50.879836 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:50.879844 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:50.879872 3743291 retry.go:31] will retry after 622.485095ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:51.503163 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:51.558918 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:51.558976 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:51.558985 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:51.559009 3743291 retry.go:31] will retry after 677.247573ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:52.236439 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:52.278894 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:52.278954 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:52.278967 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:52.278991 3743291 retry.go:31] will retry after 1.580099825s: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:53.859324 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:53.879660 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:53.879719 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:53.879734 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:53.879759 3743291 retry.go:31] will retry after 3.74325627s: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:57.625215 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:57.651929 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:57.651985 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:57.651995 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:57.652018 3743291 retry.go:31] will retry after 2.127883048s: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:59.780859 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:11:59.803020 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:11:59.803081 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:11:59.803093 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:11:59.803117 3743291 retry.go:31] will retry after 2.857578476s: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:12:02.661258 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:12:02.694655 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:12:02.694715 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:12:02.694735 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:12:02.694758 3743291 retry.go:31] will retry after 6.832793964s: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:12:09.527837 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:12:09.543898 3743291 cli_runner.go:211] docker container inspect missing-upgrade-953629 --format={{.State.Status}} returned with exit code 1
	I0731 11:12:09.543955 3743291 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	I0731 11:12:09.543968 3743291 oci.go:661] temporary error: container missing-upgrade-953629 status is  but expect it to be exited
	I0731 11:12:09.544002 3743291 oci.go:88] couldn't shut down missing-upgrade-953629 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-953629": docker container inspect missing-upgrade-953629 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-953629
	 
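	The `retry.go` lines above show minikube's verify-shutdown loop: inspect the container's state, and on failure wait an increasing interval before trying again until it gives up ("couldn't shut down ... might be okay"). This is not minikube's actual implementation (its real delays are jittered and the check shells out to `docker container inspect`); the following is a minimal self-contained sketch of the same retry-with-growing-backoff pattern, with the docker check replaced by a stub that always fails, just as it does in this log where the container no longer exists:

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff calls check, and on failure sleeps a growing delay
	// before trying again, up to maxAttempts. On exhaustion it wraps the
	// last error, mirroring the "couldn't verify container is exited"
	// message in the log.
	func retryWithBackoff(check func() error, maxAttempts int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < maxAttempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2 // grow the wait between attempts
		}
		return fmt.Errorf("couldn't verify container is exited after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		attempts := 0
		// Stubbed check: the state is never resolvable, as with the
		// already-deleted missing-upgrade-953629 container above.
		err := retryWithBackoff(func() error {
			attempts++
			return errors.New(`unknown state "missing-upgrade-953629"`)
		}, 5, time.Millisecond)
		fmt.Println(attempts, err != nil)
	}
	```

	As in the log, the loop treats every failure as a "temporary error" and only reports after the attempt budget is spent, which is why deletion still proceeds afterwards with `docker rm -f -v`.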
	I0731 11:12:09.544065 3743291 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-953629
	I0731 11:12:09.560839 3743291 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-953629
	W0731 11:12:09.577287 3743291 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-953629 returned with exit code 1
	I0731 11:12:09.577378 3743291 cli_runner.go:164] Run: docker network inspect missing-upgrade-953629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:12:09.594340 3743291 cli_runner.go:164] Run: docker network rm missing-upgrade-953629
	I0731 11:12:09.700880 3743291 fix.go:114] Sleeping 1 second for extra luck!
	I0731 11:12:10.701017 3743291 start.go:125] createHost starting for "" (driver="docker")
	I0731 11:12:10.704643 3743291 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 11:12:10.704774 3743291 start.go:159] libmachine.API.Create for "missing-upgrade-953629" (driver="docker")
	I0731 11:12:10.704797 3743291 client.go:168] LocalClient.Create starting
	I0731 11:12:10.704883 3743291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem
	I0731 11:12:10.704921 3743291 main.go:141] libmachine: Decoding PEM data...
	I0731 11:12:10.704943 3743291 main.go:141] libmachine: Parsing certificate...
	I0731 11:12:10.705007 3743291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem
	I0731 11:12:10.705031 3743291 main.go:141] libmachine: Decoding PEM data...
	I0731 11:12:10.705044 3743291 main.go:141] libmachine: Parsing certificate...
	I0731 11:12:10.705326 3743291 cli_runner.go:164] Run: docker network inspect missing-upgrade-953629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:12:10.721783 3743291 cli_runner.go:211] docker network inspect missing-upgrade-953629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:12:10.721866 3743291 network_create.go:281] running [docker network inspect missing-upgrade-953629] to gather additional debugging logs...
	I0731 11:12:10.721885 3743291 cli_runner.go:164] Run: docker network inspect missing-upgrade-953629
	W0731 11:12:10.742769 3743291 cli_runner.go:211] docker network inspect missing-upgrade-953629 returned with exit code 1
	I0731 11:12:10.742800 3743291 network_create.go:284] error running [docker network inspect missing-upgrade-953629]: docker network inspect missing-upgrade-953629: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-953629 not found
	I0731 11:12:10.742813 3743291 network_create.go:286] output of [docker network inspect missing-upgrade-953629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-953629 not found
	
	** /stderr **
	I0731 11:12:10.742904 3743291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:12:10.770998 3743291 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab16070d357b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:07:d3:36:48} reservation:<nil>}
	I0731 11:12:10.771393 3743291 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-78b0e162f9c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:0f:3e:1c:b0} reservation:<nil>}
	I0731 11:12:10.771759 3743291 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aea68ac414ae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b5:3b:44:6e} reservation:<nil>}
	I0731 11:12:10.772192 3743291 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000052c60}
	I0731 11:12:10.772215 3743291 network_create.go:123] attempt to create docker network missing-upgrade-953629 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0731 11:12:10.772280 3743291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-953629 missing-upgrade-953629
	I0731 11:12:10.849490 3743291 network_create.go:107] docker network missing-upgrade-953629 192.168.76.0/24 created
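	The `network.go` lines above walk candidate private /24 blocks (192.168.49.0, .58.0, .67.0, stepping the third octet by 9), skip each one already claimed by an existing bridge, and take the first free block, here 192.168.76.0/24. The real subnet picker also inspects host interfaces and holds a reservation; the following is only a minimal sketch of the first-free-candidate scan, with the taken subnets hard-coded to match this log:

	```go
	package main

	import "fmt"

	// pickFreeSubnet scans candidate 192.168.x.0/24 blocks in the same
	// order as the log (third octet 49, 58, 67, ...) and returns the
	// first one not present in the taken set.
	func pickFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // no free candidate
	}

	func main() {
		// The three bridges the log reports as taken.
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-ab16070d357b
			"192.168.58.0/24": true, // br-78b0e162f9c0
			"192.168.67.0/24": true, // br-aea68ac414ae
		}
		fmt.Println(pickFreeSubnet(taken))
	}
	```

	With those three blocks occupied the scan lands on 192.168.76.0/24, which is exactly the subnet passed to `docker network create` above, and 192.168.76.2 then becomes the node's static IP.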
	I0731 11:12:10.849521 3743291 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-953629" container
	I0731 11:12:10.849593 3743291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:12:10.866295 3743291 cli_runner.go:164] Run: docker volume create missing-upgrade-953629 --label name.minikube.sigs.k8s.io=missing-upgrade-953629 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:12:10.886880 3743291 oci.go:103] Successfully created a docker volume missing-upgrade-953629
	I0731 11:12:10.886959 3743291 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-953629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-953629 --entrypoint /usr/bin/test -v missing-upgrade-953629:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0731 11:12:11.874176 3743291 oci.go:107] Successfully prepared a docker volume missing-upgrade-953629
	I0731 11:12:11.874206 3743291 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:12:11.874254 3743291 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:12:11.874338 3743291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-953629:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:12:22.247982 3743291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-953629:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.373598436s)
	I0731 11:12:22.248010 3743291 kic.go:199] duration metric: took 10.373781 seconds to extract preloaded images to volume
	W0731 11:12:22.248147 3743291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:12:22.248250 3743291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:12:22.382702 3743291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-953629 --name missing-upgrade-953629 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-953629 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-953629 --network missing-upgrade-953629 --ip 192.168.76.2 --volume missing-upgrade-953629:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0731 11:12:22.875270 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Running}}
	I0731 11:12:22.913941 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	I0731 11:12:22.940788 3743291 cli_runner.go:164] Run: docker exec missing-upgrade-953629 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:12:23.021385 3743291 oci.go:144] the created container "missing-upgrade-953629" has a running status.
	I0731 11:12:23.021413 3743291 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa...
	I0731 11:12:23.461895 3743291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:12:23.495558 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	I0731 11:12:23.531864 3743291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:12:23.531888 3743291 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-953629 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:12:23.642988 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	I0731 11:12:23.689206 3743291 machine.go:88] provisioning docker machine ...
	I0731 11:12:23.689236 3743291 ubuntu.go:169] provisioning hostname "missing-upgrade-953629"
	I0731 11:12:23.689306 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:23.720046 3743291 main.go:141] libmachine: Using SSH client type: native
	I0731 11:12:23.720489 3743291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35523 <nil> <nil>}
	I0731 11:12:23.720501 3743291 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-953629 && echo "missing-upgrade-953629" | sudo tee /etc/hostname
	I0731 11:12:23.721045 3743291 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48080->127.0.0.1:35523: read: connection reset by peer
	I0731 11:12:26.915193 3743291 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-953629
	
	I0731 11:12:26.915323 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:26.949846 3743291 main.go:141] libmachine: Using SSH client type: native
	I0731 11:12:26.950291 3743291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35523 <nil> <nil>}
	I0731 11:12:26.950316 3743291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-953629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-953629/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-953629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:12:27.102727 3743291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:12:27.102755 3743291 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16969-3616075/.minikube CaCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16969-3616075/.minikube}
	I0731 11:12:27.102807 3743291 ubuntu.go:177] setting up certificates
	I0731 11:12:27.102817 3743291 provision.go:83] configureAuth start
	I0731 11:12:27.102916 3743291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-953629
	I0731 11:12:27.130392 3743291 provision.go:138] copyHostCerts
	I0731 11:12:27.130465 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem, removing ...
	I0731 11:12:27.130480 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem
	I0731 11:12:27.130555 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem (1679 bytes)
	I0731 11:12:27.130645 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem, removing ...
	I0731 11:12:27.130655 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem
	I0731 11:12:27.130680 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem (1082 bytes)
	I0731 11:12:27.130735 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem, removing ...
	I0731 11:12:27.130745 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem
	I0731 11:12:27.130767 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem (1123 bytes)
	I0731 11:12:27.130812 3743291 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-953629 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-953629]
	I0731 11:12:27.627102 3743291 provision.go:172] copyRemoteCerts
	I0731 11:12:27.627210 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:12:27.627280 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:27.646723 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:27.755770 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:12:27.789923 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:12:27.834889 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:12:27.874082 3743291 provision.go:86] duration metric: configureAuth took 771.246462ms
	I0731 11:12:27.874112 3743291 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:12:27.874357 3743291 config.go:182] Loaded profile config "missing-upgrade-953629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0731 11:12:27.874370 3743291 machine.go:91] provisioned docker machine in 4.185148888s
	I0731 11:12:27.874377 3743291 client.go:171] LocalClient.Create took 17.169574566s
	I0731 11:12:27.874402 3743291 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-953629" took 17.169627809s
	I0731 11:12:27.874416 3743291 start.go:300] post-start starting for "missing-upgrade-953629" (driver="docker")
	I0731 11:12:27.874437 3743291 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:12:27.874522 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:12:27.874593 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:27.922655 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:28.028070 3743291 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:12:28.032723 3743291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:12:28.032754 3743291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:12:28.032766 3743291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:12:28.032773 3743291 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0731 11:12:28.032783 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 11:12:28.032838 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 11:12:28.032930 3743291 filesync.go:149] local asset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> 36214032.pem in /etc/ssl/certs
	I0731 11:12:28.033042 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:12:28.042452 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 11:12:28.077158 3743291 start.go:303] post-start completed in 202.715457ms
	I0731 11:12:28.077571 3743291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-953629
	I0731 11:12:28.112525 3743291 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/missing-upgrade-953629/config.json ...
	I0731 11:12:28.112788 3743291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:12:28.112839 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:28.154503 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:28.262970 3743291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:12:28.269456 3743291 start.go:128] duration metric: createHost completed in 17.568408122s
	I0731 11:12:28.269557 3743291 cli_runner.go:164] Run: docker container inspect missing-upgrade-953629 --format={{.State.Status}}
	W0731 11:12:28.306576 3743291 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 11:12:28.306603 3743291 machine.go:88] provisioning docker machine ...
	I0731 11:12:28.306620 3743291 ubuntu.go:169] provisioning hostname "missing-upgrade-953629"
	I0731 11:12:28.306697 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:28.348804 3743291 main.go:141] libmachine: Using SSH client type: native
	I0731 11:12:28.349266 3743291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35523 <nil> <nil>}
	I0731 11:12:28.349286 3743291 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-953629 && echo "missing-upgrade-953629" | sudo tee /etc/hostname
	I0731 11:12:28.513665 3743291 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-953629
	
	I0731 11:12:28.513758 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:28.540675 3743291 main.go:141] libmachine: Using SSH client type: native
	I0731 11:12:28.541159 3743291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35523 <nil> <nil>}
	I0731 11:12:28.541182 3743291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-953629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-953629/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-953629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:12:28.686273 3743291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:12:28.686298 3743291 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16969-3616075/.minikube CaCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16969-3616075/.minikube}
	I0731 11:12:28.686315 3743291 ubuntu.go:177] setting up certificates
	I0731 11:12:28.686323 3743291 provision.go:83] configureAuth start
	I0731 11:12:28.686400 3743291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-953629
	I0731 11:12:28.751476 3743291 provision.go:138] copyHostCerts
	I0731 11:12:28.751554 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem, removing ...
	I0731 11:12:28.751562 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem
	I0731 11:12:28.751635 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem (1082 bytes)
	I0731 11:12:28.751716 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem, removing ...
	I0731 11:12:28.751721 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem
	I0731 11:12:28.751747 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem (1123 bytes)
	I0731 11:12:28.751794 3743291 exec_runner.go:144] found /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem, removing ...
	I0731 11:12:28.751798 3743291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem
	I0731 11:12:28.751820 3743291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem (1679 bytes)
	I0731 11:12:28.751862 3743291 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-953629 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-953629]
	I0731 11:12:29.872378 3743291 provision.go:172] copyRemoteCerts
	I0731 11:12:29.872474 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:12:29.872525 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:29.899570 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.012268 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:12:30.044678 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:12:30.072114 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:12:30.107900 3743291 provision.go:86] duration metric: configureAuth took 1.421564084s
	I0731 11:12:30.107939 3743291 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:12:30.108150 3743291 config.go:182] Loaded profile config "missing-upgrade-953629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0731 11:12:30.108169 3743291 machine.go:91] provisioned docker machine in 1.801559946s
	I0731 11:12:30.108177 3743291 start.go:300] post-start starting for "missing-upgrade-953629" (driver="docker")
	I0731 11:12:30.108190 3743291 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:12:30.108260 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:12:30.108314 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.145900 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.251521 3743291 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:12:30.258969 3743291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:12:30.258999 3743291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:12:30.259014 3743291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:12:30.259021 3743291 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0731 11:12:30.259031 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 11:12:30.259079 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 11:12:30.259152 3743291 filesync.go:149] local asset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> 36214032.pem in /etc/ssl/certs
	I0731 11:12:30.259257 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:12:30.276418 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 11:12:30.315306 3743291 start.go:303] post-start completed in 207.109601ms
	I0731 11:12:30.315450 3743291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:12:30.315521 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.343635 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.438177 3743291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:12:30.444523 3743291 fix.go:56] fixHost completed within 41.26505576s
	I0731 11:12:30.444543 3743291 start.go:83] releasing machines lock for "missing-upgrade-953629", held for 41.265118398s
	I0731 11:12:30.444610 3743291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-953629
	I0731 11:12:30.470586 3743291 ssh_runner.go:195] Run: cat /version.json
	I0731 11:12:30.470636 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.470924 3743291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:12:30.470977 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.496642 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.497216 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	W0731 11:12:30.589817 3743291 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 11:12:30.589902 3743291 ssh_runner.go:195] Run: systemctl --version
	I0731 11:12:30.732395 3743291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:12:30.738458 3743291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0731 11:12:30.780829 3743291 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:12:30.780952 3743291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:12:30.818823 3743291 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 11:12:30.818887 3743291 start.go:466] detecting cgroup driver to use...
	I0731 11:12:30.818932 3743291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:12:30.819008 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 11:12:30.834770 3743291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 11:12:30.855110 3743291 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:12:30.855226 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:12:30.872790 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:12:30.887238 3743291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 11:12:30.900988 3743291 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 11:12:30.901087 3743291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:12:31.040461 3743291 docker.go:212] disabling docker service ...
	I0731 11:12:31.040540 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:12:31.083594 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:12:31.102110 3743291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:12:31.246041 3743291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:12:31.370769 3743291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:12:31.384206 3743291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:12:31.402344 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0731 11:12:31.414540 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 11:12:31.425306 3743291 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 11:12:31.425374 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 11:12:31.438601 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 11:12:31.449887 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 11:12:31.462516 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 11:12:31.473996 3743291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:12:31.486234 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 11:12:31.497497 3743291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:12:31.508022 3743291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:12:31.518381 3743291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:12:31.654448 3743291 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 11:12:31.782787 3743291 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0731 11:12:31.782855 3743291 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0731 11:12:31.787627 3743291 start.go:534] Will wait 60s for crictl version
	I0731 11:12:31.787688 3743291 ssh_runner.go:195] Run: which crictl
	I0731 11:12:31.793074 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:12:31.840090 3743291 retry.go:31] will retry after 11.964226651s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:12:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:12:43.804521 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:12:43.850640 3743291 retry.go:31] will retry after 20.262975035s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:12:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:13:04.113857 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:13:04.142683 3743291 retry.go:31] will retry after 22.071455118s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:13:26.215081 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:13:26.250326 3743291 out.go:177] 
	W0731 11:13:26.252059 3743291 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:26Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0731 11:13:26.252075 3743291 out.go:239] * 
	W0731 11:13:26.255257 3743291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:13:26.257485 3743291 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.22.0. args: out/minikube-linux-arm64 start -p missing-upgrade-953629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-31 11:13:26.336880712 +0000 UTC m=+2140.797908435
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-953629
helpers_test.go:235: (dbg) docker inspect missing-upgrade-953629:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663",
	        "Created": "2023-07-31T11:12:22.412571992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3745724,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:12:22.867717283Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663/hosts",
	        "LogPath": "/var/lib/docker/containers/ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663/ea6abeff2a3a5c0bd29ca2ba1cdec0d3f58563733ab12de1d5cf4ec86ccbd663-json.log",
	        "Name": "/missing-upgrade-953629",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-953629:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-953629",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c33e5a7e6e0294db636d99e2c89b21d79bcdab62c12ffab12515346a8736b841-init/diff:/var/lib/docker/overlay2/488f868ce2a7775f7d809bb34dc8c2a9636c7c45b7ad969584d533c1dec1d7a1/diff:/var/lib/docker/overlay2/7781792749c49e43158fc62a06e6ff22efed06091ea7f4e867d4eb128179e422/diff:/var/lib/docker/overlay2/3466f07b49758bf9099678488ee85dd5595ea8d9042d33d1e6b4ed600b188f38/diff:/var/lib/docker/overlay2/f40ae81dd8ca622e94ec2f5f21b6949cc2d867fe60e5da416ea4d3d7596cc6ad/diff:/var/lib/docker/overlay2/7b6241b782f17c436cad83ddd5ea945c991fec155392aad3bf45906665fe799d/diff:/var/lib/docker/overlay2/ea3e3009d754f8e62dca869ba69de4355d1d7c4c74921db4d8d0a6f7dfd9561e/diff:/var/lib/docker/overlay2/0d8017ef1a3f16343a502ae0f747d82e7534b57395635f8b88311bd0d2b4473f/diff:/var/lib/docker/overlay2/8ce99ba9b8d194a4c54f4f706a6860ba69d3120d6e04c74e1eaf947351283f68/diff:/var/lib/docker/overlay2/6c9af6854e4430ea7d6be18d79ef99cbb68e9b861cbce86a7aad62ee8daeddfe/diff:/var/lib/docker/overlay2/e4eb70
ecdb3a74158345a6799b246caa020394950af035d3d8965d1ce92318a2/diff:/var/lib/docker/overlay2/fc782c8331b112cd002970480d7ee7bd269b7896ec855148c92422c309553b59/diff:/var/lib/docker/overlay2/c0161dd6157d296ac1d68c02b8b7b26fed6893c521a2ef845fc3076dfdb4a437/diff:/var/lib/docker/overlay2/457a06c2956f7ade8f40525310235c95b9d1ba5021564c0f309ad356dc76c6c6/diff:/var/lib/docker/overlay2/66a46d6982eee0d301badce6039524beac8bddd46feb5f9aedffeeb3c207d0a8/diff:/var/lib/docker/overlay2/5c7af02943f3aa339ce3f70197271b30431e61702de7c4af169b550c11e717a6/diff:/var/lib/docker/overlay2/e5db2bb72a72aa78a74df1e54b93dce2075cbac624615f082c558b170b5a399c/diff:/var/lib/docker/overlay2/f021e6c1087fd55191b834f7fedfb0a5faa81c322a44244c19a304f4f1cac1b2/diff:/var/lib/docker/overlay2/ba20b6e93f71f3849ac72d1571fd76ed498a9054ad6f9ea00f64a4c794cda2ef/diff:/var/lib/docker/overlay2/5091c539196b601c6a379abe4d43272c181e52506141a8ef2b7b7e5fc87b25f8/diff:/var/lib/docker/overlay2/37c8ef695371f1f24b48ff6852dfd661d8370e39c2e63a0f62306d7e31209a8f/diff:/var/lib/d
ocker/overlay2/185daa922a86886015632843c898cf58619390b096fb6604813dfee3db78ac69/diff:/var/lib/docker/overlay2/ce8623793b11898657fdb1bee7511525e194d6893e90ae6fddecf8652725fad5/diff:/var/lib/docker/overlay2/6e54c45fb5d469c7380e321ba695b885788c8f91899f73361b0e88685092aa15/diff:/var/lib/docker/overlay2/72e7f9943ba91c74ca9db935ebc46f14d6751e2ddf1c00f7d850d6a78b581186/diff:/var/lib/docker/overlay2/bc409bac6233057ed14eb8eb437cb53f60d2eb5d50e5075b4370165f5516bfaf/diff:/var/lib/docker/overlay2/c859d8416f05055edf7e05c377f8ae93f56ff92085e02d078b30dc4534ecf1b9/diff:/var/lib/docker/overlay2/6aebe502b3c1a1c2c0d237446ad3819836c546e70682e2084e43444384df5809/diff:/var/lib/docker/overlay2/31684fc1501dbfaa8a1c2811acf8392582813dd4d702857eb4bd340c696a4286/diff:/var/lib/docker/overlay2/37b9e72aceb8b53e3be0dda4ec193adbe30799ad0d12b418fa244bca6fd61227/diff:/var/lib/docker/overlay2/e44a90117a02effa0281c1090e0eea9862e80646aa9a00146c7b014526e60489/diff:/var/lib/docker/overlay2/ed54eb5133e220c01a0fd07fa6a97ea0d783b0fe44854e95b4c1c1e40ac
58806/diff:/var/lib/docker/overlay2/7407b1cacec80557d4d8786cd00a407b72429a6cb699972297ef277a8e370522/diff:/var/lib/docker/overlay2/d7382f56d779f0548fe3590592c0f91306bf9a40c5d9e82069936f9fbb6f0de2/diff:/var/lib/docker/overlay2/887c3e094204c931bdfc8a591cf6cbf0827f7c1f318d28bbd3abeed7cafeaea3/diff:/var/lib/docker/overlay2/b42945b0a053cdfcf30419eed2f1d173ba4250b998fd486bf3fbf91ee1b1c0c6/diff:/var/lib/docker/overlay2/cd15a883ca79a5e50026e792f5a679770266837b1a81710335954a5abe5537ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c33e5a7e6e0294db636d99e2c89b21d79bcdab62c12ffab12515346a8736b841/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c33e5a7e6e0294db636d99e2c89b21d79bcdab62c12ffab12515346a8736b841/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c33e5a7e6e0294db636d99e2c89b21d79bcdab62c12ffab12515346a8736b841/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-953629",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-953629/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-953629",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-953629",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-953629",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1b36eb7d6db0dbe5fc60e21688d015adc42f14e5a9afe8c8045d9fc39dec4821",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35520"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1b36eb7d6db0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-953629": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ea6abeff2a3a",
	                        "missing-upgrade-953629"
	                    ],
	                    "NetworkID": "49fd97598700325bdb81a489b0881489916922f580baf446c672fe2b2110133d",
	                    "EndpointID": "f0776857938fe8b68423dcb1a41f70e5460b87791aa2bfb2b197024998530f68",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
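The inspect output above shows each exposed container port bound to a loopback host port (e.g. `22/tcp` → `127.0.0.1:35523`), which is how minikube's test helpers later dial SSH into the container. A minimal sketch of extracting those bindings from `docker inspect` JSON — the sample data below is hard-coded from the log, not a live query:

```python
# Extract host-port mappings from `docker inspect` output.
# Sample data is copied from the NetworkSettings.Ports section above;
# in practice you would run `docker inspect <container>` and parse its stdout.
import json

inspect_json = """
{
    "NetworkSettings": {
        "Ports": {
            "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "35523"}],
            "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "35520"}]
        }
    }
}
"""

def host_ports(inspect_data: dict) -> dict:
    """Map each exposed container port to its (HostIp, HostPort) bindings."""
    ports = inspect_data["NetworkSettings"]["Ports"] or {}
    return {
        port: [(b["HostIp"], b["HostPort"]) for b in bindings]
        for port, bindings in ports.items()
        if bindings  # unbound ports appear as null/None in inspect output
    }

mappings = host_ports(json.loads(inspect_json))
print(mappings["22/tcp"])  # → [('127.0.0.1', '35523')]
```

The test logs achieve the same thing with a Go template filter, e.g. `docker container inspect -f "{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}"`, as seen further down in the minikube log output.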
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-953629 -n missing-upgrade-953629
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-953629 -n missing-upgrade-953629: exit status 2 (456.833916ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p missing-upgrade-953629 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p missing-upgrade-953629 logs -n 25: (1.44358648s)
helpers_test.go:252: TestMissingContainerUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |              Args              |          Profile          |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p pause-797340 --memory=2048  | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --install-addons=false         |                           |         |         |                               |                               |
	|         | --wait=all --driver=docker     |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC           |                               |
	|         | --no-kubernetes                |                           |         |         |                               |                               |
	|         | --kubernetes-version=1.20      |                           |         |         |                               |                               |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --no-kubernetes                |                           |         |         |                               |                               |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| delete  | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	| start   | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --no-kubernetes                |                           |         |         |                               |                               |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --alsologtostderr              |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| ssh     | -p NoKubernetes-810140 sudo    | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           |                               |
	|         | systemctl is-active --quiet    |                           |         |         |                               |                               |
	|         | service kubelet                |                           |         |         |                               |                               |
	| stop    | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	| start   | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| ssh     | -p NoKubernetes-810140 sudo    | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           |                               |
	|         | systemctl is-active --quiet    |                           |         |         |                               |                               |
	|         | service kubelet                |                           |         |         |                               |                               |
	| delete  | -p NoKubernetes-810140         | NoKubernetes-810140       | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	| pause   | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --alsologtostderr -v=5         |                           |         |         |                               |                               |
	| unpause | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --alsologtostderr -v=5         |                           |         |         |                               |                               |
	| pause   | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --alsologtostderr -v=5         |                           |         |         |                               |                               |
	| delete  | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	|         | --alsologtostderr -v=5         |                           |         |         |                               |                               |
	| delete  | -p pause-797340                | pause-797340              | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:09 UTC           |
	| start   | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:09 UTC           | 31 Jul 23 11:11 UTC           |
	|         | --memory=2200                  |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                               |                               |
	|         | --alsologtostderr              |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| stop    | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:11 UTC           | 31 Jul 23 11:11 UTC           |
	| start   | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:11 UTC           | 31 Jul 23 11:12 UTC           |
	|         | --memory=2200                  |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                               |                               |
	|         | --alsologtostderr              |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p missing-upgrade-953629      | missing-upgrade-953629    | jenkins | v1.22.0 | Mon, 31 Jul 2023 11:09:52 UTC | Mon, 31 Jul 2023 11:11:28 UTC |
	|         | --memory=2200 --driver=docker  |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p missing-upgrade-953629      | missing-upgrade-953629    | jenkins | v1.31.1 | 31 Jul 23 11:11 UTC           |                               |
	|         | --memory=2200                  |                           |         |         |                               |                               |
	|         | --alsologtostderr              |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:12 UTC           |                               |
	|         | --memory=2200                  |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                               |                               |
	|         | --driver=docker                |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| start   | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:12 UTC           | 31 Jul 23 11:12 UTC           |
	|         | --memory=2200                  |                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                               |                               |
	|         | --alsologtostderr              |                           |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                           |         |         |                               |                               |
	|         | --container-runtime=containerd |                           |         |         |                               |                               |
	| delete  | -p kubernetes-upgrade-700943   | kubernetes-upgrade-700943 | jenkins | v1.31.1 | 31 Jul 23 11:12 UTC           | 31 Jul 23 11:12 UTC           |
	|---------|--------------------------------|---------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:12:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:12:33.372091 3747832 out.go:286] Setting OutFile to fd 1 ...
	I0731 11:12:33.372213 3747832 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0731 11:12:33.372216 3747832 out.go:299] Setting ErrFile to fd 2...
	I0731 11:12:33.372218 3747832 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0731 11:12:33.372357 3747832 root.go:312] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 11:12:33.372639 3747832 out.go:293] Setting JSON to false
	I0731 11:12:33.373815 3747832 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":68101,"bootTime":1690733853,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 11:12:33.373881 3747832 start.go:121] virtualization:  
	I0731 11:12:33.376996 3747832 out.go:165] * [stopped-upgrade-585335] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0731 11:12:33.379280 3747832 out.go:165]   - MINIKUBE_LOCATION=16969
	I0731 11:12:33.380956 3747832 out.go:165]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:12:33.382694 3747832 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 11:12:33.384520 3747832 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:12:33.386128 3747832 out.go:165]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:12:33.388150 3747832 out.go:165]   - KUBECONFIG=/tmp/legacy_kubeconfig32154086
	I0731 11:12:33.388772 3747832 driver.go:335] Setting default libvirt URI to qemu:///system
	I0731 11:12:33.411727 3747832 docker.go:132] docker version: linux-24.0.5
	I0731 11:12:33.411851 3747832 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0731 11:12:33.483708 3747832 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 11:12:33.47430899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:12:33.483811 3747832 docker.go:244] overlay module found
	I0731 11:12:33.486083 3747832 out.go:165] * Using the docker driver based on user configuration
	I0731 11:12:33.486104 3747832 start.go:278] selected driver: docker
	I0731 11:12:33.486109 3747832 start.go:751] validating driver "docker" against <nil>
	I0731 11:12:33.486125 3747832 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0731 11:12:33.486164 3747832 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0731 11:12:33.486178 3747832 out.go:230] ! Your cgroup does not allow setting memory.
	I0731 11:12:33.487921 3747832 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0731 11:12:33.488268 3747832 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0731 11:12:33.553699 3747832 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 11:12:33.5438645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:12:33.553817 3747832 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0731 11:12:33.553955 3747832 start_flags.go:669] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:12:33.553966 3747832 cni.go:93] Creating CNI manager for ""
	I0731 11:12:33.553991 3747832 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0731 11:12:33.553998 3747832 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0731 11:12:33.554002 3747832 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0731 11:12:33.554006 3747832 start_flags.go:270] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:12:33.554011 3747832 start_flags.go:275] config:
	{Name:stopped-upgrade-585335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-585335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0731 11:12:33.556564 3747832 out.go:165] * Starting control plane node stopped-upgrade-585335 in cluster stopped-upgrade-585335
	I0731 11:12:33.556650 3747832 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0731 11:12:33.558776 3747832 out.go:165] * Pulling base image ...
	I0731 11:12:33.558812 3747832 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:12:33.558934 3747832 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0731 11:12:33.576697 3747832 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0731 11:12:33.576711 3747832 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0731 11:12:33.727728 3747832 preload.go:120] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0731 11:12:33.727743 3747832 cache.go:56] Caching tarball of preloaded images
	I0731 11:12:33.727941 3747832 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:12:29.872378 3743291 provision.go:172] copyRemoteCerts
	I0731 11:12:29.872474 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:12:29.872525 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:29.899570 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.012268 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:12:30.044678 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:12:30.072114 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:12:30.107900 3743291 provision.go:86] duration metric: configureAuth took 1.421564084s
	I0731 11:12:30.107939 3743291 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:12:30.108150 3743291 config.go:182] Loaded profile config "missing-upgrade-953629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0731 11:12:30.108169 3743291 machine.go:91] provisioned docker machine in 1.801559946s
	I0731 11:12:30.108177 3743291 start.go:300] post-start starting for "missing-upgrade-953629" (driver="docker")
	I0731 11:12:30.108190 3743291 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:12:30.108260 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:12:30.108314 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.145900 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.251521 3743291 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:12:30.258969 3743291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:12:30.258999 3743291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:12:30.259014 3743291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:12:30.259021 3743291 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0731 11:12:30.259031 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 11:12:30.259079 3743291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 11:12:30.259152 3743291 filesync.go:149] local asset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> 36214032.pem in /etc/ssl/certs
	I0731 11:12:30.259257 3743291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:12:30.276418 3743291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 11:12:30.315306 3743291 start.go:303] post-start completed in 207.109601ms
	I0731 11:12:30.315450 3743291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:12:30.315521 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.343635 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.438177 3743291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:12:30.444523 3743291 fix.go:56] fixHost completed within 41.26505576s
	I0731 11:12:30.444543 3743291 start.go:83] releasing machines lock for "missing-upgrade-953629", held for 41.265118398s
	I0731 11:12:30.444610 3743291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-953629
	I0731 11:12:30.470586 3743291 ssh_runner.go:195] Run: cat /version.json
	I0731 11:12:30.470636 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.470924 3743291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:12:30.470977 3743291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-953629
	I0731 11:12:30.496642 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	I0731 11:12:30.497216 3743291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35523 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/missing-upgrade-953629/id_rsa Username:docker}
	W0731 11:12:30.589817 3743291 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 11:12:30.589902 3743291 ssh_runner.go:195] Run: systemctl --version
	I0731 11:12:30.732395 3743291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:12:30.738458 3743291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0731 11:12:30.780829 3743291 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:12:30.780952 3743291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:12:30.818823 3743291 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 11:12:30.818887 3743291 start.go:466] detecting cgroup driver to use...
	I0731 11:12:30.818932 3743291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:12:30.819008 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 11:12:30.834770 3743291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 11:12:30.855110 3743291 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:12:30.855226 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:12:30.872790 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:12:30.887238 3743291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 11:12:30.900988 3743291 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 11:12:30.901087 3743291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:12:31.040461 3743291 docker.go:212] disabling docker service ...
	I0731 11:12:31.040540 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:12:31.083594 3743291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:12:31.102110 3743291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:12:31.246041 3743291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:12:31.370769 3743291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:12:31.384206 3743291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:12:31.402344 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0731 11:12:31.414540 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 11:12:31.425306 3743291 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 11:12:31.425374 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 11:12:31.438601 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 11:12:31.449887 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 11:12:31.462516 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 11:12:31.473996 3743291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:12:31.486234 3743291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 11:12:31.497497 3743291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:12:31.508022 3743291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:12:31.518381 3743291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:12:31.654448 3743291 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 11:12:31.782787 3743291 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0731 11:12:31.782855 3743291 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0731 11:12:31.787627 3743291 start.go:534] Will wait 60s for crictl version
	I0731 11:12:31.787688 3743291 ssh_runner.go:195] Run: which crictl
	I0731 11:12:31.793074 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:12:31.840090 3743291 retry.go:31] will retry after 11.964226651s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:12:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:12:33.729945 3747832 out.go:165] * Downloading Kubernetes v1.21.2 preload ...
	I0731 11:12:33.729982 3747832 preload.go:238] getting checksum for preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:12:33.844741 3747832 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:7a3594d34f28b7fbfc77b9c47d2641f4 -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0731 11:12:43.804521 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:12:43.850640 3743291 retry.go:31] will retry after 20.262975035s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:12:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:12:49.638608 3747832 preload.go:248] saving checksum for preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:12:49.639708 3747832 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0731 11:12:52.290886 3747832 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0731 11:12:52.291026 3747832 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/config.json ...
	I0731 11:12:52.291052 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/config.json: {Name:mk53ea2314d765dcf600888f289a68028ad7fa82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:12:52.291279 3747832 cache.go:205] Successfully downloaded all kic artifacts
	I0731 11:12:52.291299 3747832 start.go:313] acquiring machines lock for stopped-upgrade-585335: {Name:mk90e66d62e0466a96e21d59a6b35c28baf76d7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:12:52.291368 3747832 start.go:317] acquired machines lock for "stopped-upgrade-585335" in 59.463µs
	I0731 11:12:52.291386 3747832 start.go:89] Provisioning new machine with config: &{Name:stopped-upgrade-585335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-585335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0731 11:12:52.291454 3747832 start.go:126] createHost starting for "" (driver="docker")
	I0731 11:12:52.294187 3747832 out.go:192] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 11:12:52.294433 3747832 start.go:160] libmachine.API.Create for "stopped-upgrade-585335" (driver="docker")
	I0731 11:12:52.294464 3747832 client.go:168] LocalClient.Create starting
	I0731 11:12:52.294524 3747832 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem
	I0731 11:12:52.294571 3747832 main.go:130] libmachine: Decoding PEM data...
	I0731 11:12:52.294585 3747832 main.go:130] libmachine: Parsing certificate...
	I0731 11:12:52.294680 3747832 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem
	I0731 11:12:52.294694 3747832 main.go:130] libmachine: Decoding PEM data...
	I0731 11:12:52.294704 3747832 main.go:130] libmachine: Parsing certificate...
	I0731 11:12:52.295084 3747832 cli_runner.go:115] Run: docker network inspect stopped-upgrade-585335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:12:52.312232 3747832 cli_runner.go:162] docker network inspect stopped-upgrade-585335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:12:52.312309 3747832 network_create.go:255] running [docker network inspect stopped-upgrade-585335] to gather additional debugging logs...
	I0731 11:12:52.312325 3747832 cli_runner.go:115] Run: docker network inspect stopped-upgrade-585335
	W0731 11:12:52.330778 3747832 cli_runner.go:162] docker network inspect stopped-upgrade-585335 returned with exit code 1
	I0731 11:12:52.330818 3747832 network_create.go:258] error running [docker network inspect stopped-upgrade-585335]: docker network inspect stopped-upgrade-585335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-585335 not found
	I0731 11:12:52.330834 3747832 network_create.go:260] output of [docker network inspect stopped-upgrade-585335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-585335 not found
	
	** /stderr **
	I0731 11:12:52.330892 3747832 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:12:52.348947 3747832 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ab16070d357b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:07:d3:36:48}}
	I0731 11:12:52.349358 3747832 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-78b0e162f9c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:0f:3e:1c:b0}}
	I0731 11:12:52.349777 3747832 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0x400000e300] misses:0}
	I0731 11:12:52.349816 3747832 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0731 11:12:52.349830 3747832 network_create.go:106] attempt to create docker network stopped-upgrade-585335 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0731 11:12:52.349895 3747832 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true stopped-upgrade-585335
	I0731 11:12:52.424131 3747832 network_create.go:90] docker network stopped-upgrade-585335 192.168.67.0/24 created
	I0731 11:12:52.424150 3747832 kic.go:106] calculated static IP "192.168.67.2" for the "stopped-upgrade-585335" container
	I0731 11:12:52.424225 3747832 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0731 11:12:52.440585 3747832 cli_runner.go:115] Run: docker volume create stopped-upgrade-585335 --label name.minikube.sigs.k8s.io=stopped-upgrade-585335 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:12:52.458521 3747832 oci.go:102] Successfully created a docker volume stopped-upgrade-585335
	I0731 11:12:52.458598 3747832 cli_runner.go:115] Run: docker run --rm --name stopped-upgrade-585335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-585335 --entrypoint /usr/bin/test -v stopped-upgrade-585335:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0731 11:12:53.567030 3747832 cli_runner.go:168] Completed: docker run --rm --name stopped-upgrade-585335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-585335 --entrypoint /usr/bin/test -v stopped-upgrade-585335:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (1.108395689s)
	I0731 11:12:53.567048 3747832 oci.go:106] Successfully prepared a docker volume stopped-upgrade-585335
	W0731 11:12:53.567083 3747832 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0731 11:12:53.567090 3747832 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0731 11:12:53.567101 3747832 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:12:53.567122 3747832 kic.go:179] Starting extracting preloaded images to volume ...
	I0731 11:12:53.567147 3747832 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:12:53.567180 3747832 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-585335:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:12:53.705922 3747832 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-585335 --name stopped-upgrade-585335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-585335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-585335 --network stopped-upgrade-585335 --ip 192.168.67.2 --volume stopped-upgrade-585335:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0731 11:12:54.150951 3747832 cli_runner.go:115] Run: docker container inspect stopped-upgrade-585335 --format={{.State.Running}}
	I0731 11:12:54.176070 3747832 cli_runner.go:115] Run: docker container inspect stopped-upgrade-585335 --format={{.State.Status}}
	I0731 11:12:54.201648 3747832 cli_runner.go:115] Run: docker exec stopped-upgrade-585335 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:12:54.316596 3747832 oci.go:278] the created container "stopped-upgrade-585335" has a running status.
	I0731 11:12:54.316615 3747832 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa...
	I0731 11:12:54.834719 3747832 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:12:54.869360 3747832 cli_runner.go:115] Run: docker container inspect stopped-upgrade-585335 --format={{.State.Status}}
	I0731 11:12:54.892609 3747832 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:12:54.892619 3747832 kic_runner.go:115] Args: [docker exec --privileged stopped-upgrade-585335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:13:01.124811 3747832 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-585335:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (7.557597461s)
	I0731 11:13:01.124828 3747832 kic.go:188] duration metric: took 7.557704 seconds to extract preloaded images to volume
	I0731 11:13:01.124908 3747832 cli_runner.go:115] Run: docker container inspect stopped-upgrade-585335 --format={{.State.Status}}
	I0731 11:13:01.147782 3747832 machine.go:88] provisioning docker machine ...
	I0731 11:13:01.147809 3747832 ubuntu.go:169] provisioning hostname "stopped-upgrade-585335"
	I0731 11:13:01.147877 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:01.164988 3747832 main.go:130] libmachine: Using SSH client type: native
	I0731 11:13:01.165205 3747832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 35528 <nil> <nil>}
	I0731 11:13:01.165217 3747832 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-585335 && echo "stopped-upgrade-585335" | sudo tee /etc/hostname
	I0731 11:13:01.320557 3747832 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-585335
	
	I0731 11:13:01.320622 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:01.338886 3747832 main.go:130] libmachine: Using SSH client type: native
	I0731 11:13:01.339047 3747832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 35528 <nil> <nil>}
	I0731 11:13:01.339065 3747832 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-585335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-585335/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-585335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:13:01.457991 3747832 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:13:01.458008 3747832 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16969-3616075/.minikube CaCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16969-3616075/.minikube}
	I0731 11:13:01.458030 3747832 ubuntu.go:177] setting up certificates
	I0731 11:13:01.458039 3747832 provision.go:83] configureAuth start
	I0731 11:13:01.458100 3747832 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-585335
	I0731 11:13:01.477468 3747832 provision.go:137] copyHostCerts
	I0731 11:13:01.477521 3747832 exec_runner.go:145] found /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem, removing ...
	I0731 11:13:01.477528 3747832 exec_runner.go:190] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem
	I0731 11:13:01.477605 3747832 exec_runner.go:152] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/cert.pem (1123 bytes)
	I0731 11:13:01.477687 3747832 exec_runner.go:145] found /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem, removing ...
	I0731 11:13:01.477690 3747832 exec_runner.go:190] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem
	I0731 11:13:01.477712 3747832 exec_runner.go:152] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/key.pem (1679 bytes)
	I0731 11:13:01.477758 3747832 exec_runner.go:145] found /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem, removing ...
	I0731 11:13:01.477762 3747832 exec_runner.go:190] rm: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem
	I0731 11:13:01.477782 3747832 exec_runner.go:152] cp: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.pem (1082 bytes)
	I0731 11:13:01.477830 3747832 provision.go:111] generating server cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-585335 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-585335]
	I0731 11:13:01.825975 3747832 provision.go:171] copyRemoteCerts
	I0731 11:13:01.826026 3747832 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:13:01.826067 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:01.844362 3747832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35528 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa Username:docker}
	I0731 11:13:01.934036 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:13:01.956361 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 11:13:01.979232 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:13:02.001961 3747832 provision.go:86] duration metric: configureAuth took 543.908757ms
	I0731 11:13:02.001978 3747832 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:13:02.002181 3747832 machine.go:91] provisioned docker machine in 854.388952ms
	I0731 11:13:02.002187 3747832 client.go:171] LocalClient.Create took 9.707719728s
	I0731 11:13:02.002197 3747832 start.go:168] duration metric: libmachine.API.Create for "stopped-upgrade-585335" took 9.707762845s
	I0731 11:13:02.002204 3747832 start.go:267] post-start starting for "stopped-upgrade-585335" (driver="docker")
	I0731 11:13:02.002208 3747832 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:13:02.002259 3747832 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:13:02.002298 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:02.024857 3747832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35528 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa Username:docker}
	I0731 11:13:02.114420 3747832 ssh_runner.go:149] Run: cat /etc/os-release
	I0731 11:13:02.118337 3747832 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:13:02.118356 3747832 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:13:02.118369 3747832 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:13:02.118374 3747832 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0731 11:13:02.118382 3747832 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/addons for local assets ...
	I0731 11:13:02.118440 3747832 filesync.go:126] Scanning /home/jenkins/minikube-integration/16969-3616075/.minikube/files for local assets ...
	I0731 11:13:02.118522 3747832 filesync.go:149] local asset: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem -> 36214032.pem in /etc/ssl/certs
	I0731 11:13:02.118630 3747832 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:13:02.127382 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/ssl/certs/36214032.pem --> /etc/ssl/certs/36214032.pem (1708 bytes)
	I0731 11:13:02.149981 3747832 start.go:270] post-start completed in 147.763344ms
	I0731 11:13:02.150364 3747832 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-585335
	I0731 11:13:02.167641 3747832 profile.go:148] Saving config to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/config.json ...
	I0731 11:13:02.167908 3747832 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:13:02.167947 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:02.185289 3747832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35528 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa Username:docker}
	I0731 11:13:02.272216 3747832 start.go:129] duration metric: createHost completed in 9.980751816s
	I0731 11:13:02.272229 3747832 start.go:80] releasing machines lock for "stopped-upgrade-585335", held for 9.980854856s
	I0731 11:13:02.272314 3747832 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-585335
	I0731 11:13:02.288897 3747832 ssh_runner.go:149] Run: systemctl --version
	I0731 11:13:02.288939 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:02.288975 3747832 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0731 11:13:02.289024 3747832 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-585335
	I0731 11:13:02.314390 3747832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35528 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa Username:docker}
	I0731 11:13:02.326426 3747832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35528 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/stopped-upgrade-585335/id_rsa Username:docker}
	I0731 11:13:02.554328 3747832 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0731 11:13:02.567280 3747832 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0731 11:13:02.579604 3747832 docker.go:153] disabling docker service ...
	I0731 11:13:02.579663 3747832 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0731 11:13:02.601718 3747832 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0731 11:13:02.614505 3747832 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0731 11:13:02.710890 3747832 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0731 11:13:02.807133 3747832 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0731 11:13:02.819077 3747832 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:13:02.835468 3747832 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0731 11:13:02.852603 3747832 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:13:02.861284 3747832 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:13:02.869968 3747832 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0731 11:13:02.976442 3747832 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0731 11:13:03.068882 3747832 start.go:386] Will wait 60s for socket path /run/containerd/containerd.sock
	I0731 11:13:03.068950 3747832 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0731 11:13:03.074008 3747832 start.go:411] Will wait 60s for crictl version
	I0731 11:13:03.074077 3747832 ssh_runner.go:149] Run: sudo crictl version
	I0731 11:13:03.113319 3747832 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0731 11:13:04.113857 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:13:04.142683 3743291 retry.go:31] will retry after 22.071455118s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0731 11:13:14.160125 3747832 ssh_runner.go:149] Run: sudo crictl version
	I0731 11:13:14.190110 3747832 start.go:420] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
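The retry behaviour visible above (`retry.go` backing off while containerd reports "server is not initialized yet", then `crictl version` succeeding) can be sketched as a plain shell loop. This is a simplified illustration, not minikube's actual retry implementation; `probe` is a stand-in for `sudo crictl version` so the sketch runs anywhere:

```shell
# Retry a readiness probe with a delay until it succeeds or attempts
# run out, mirroring the "will retry after ..." lines in the log above.
probe() { true; }   # stand-in for: sudo crictl version

attempts=0
until probe; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    echo "runtime did not become ready" >&2
    exit 1
  fi
  sleep 1
done
echo "runtime became ready after $attempts retries"
```

Note that only the first failure mode above is transient: the `Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService` error from the other process (pid 3743291) is an API-version mismatch between the crictl binary and the runtime, which no amount of retrying resolves, as the later `RUNTIME_ENABLE` exit confirms.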
	I0731 11:13:14.190164 3747832 ssh_runner.go:149] Run: containerd --version
	I0731 11:13:14.218136 3747832 ssh_runner.go:149] Run: containerd --version
	I0731 11:13:14.248711 3747832 out.go:165] * Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
	I0731 11:13:14.248797 3747832 cli_runner.go:115] Run: docker network inspect stopped-upgrade-585335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:13:14.266339 3747832 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0731 11:13:14.270519 3747832 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:13:14.282519 3747832 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0731 11:13:14.282579 3747832 ssh_runner.go:149] Run: sudo crictl images --output json
	I0731 11:13:14.318439 3747832 containerd.go:575] all images are preloaded for containerd runtime.
	I0731 11:13:14.318450 3747832 containerd.go:479] Images already preloaded, skipping extraction
	I0731 11:13:14.318514 3747832 ssh_runner.go:149] Run: sudo crictl images --output json
	I0731 11:13:14.349918 3747832 containerd.go:575] all images are preloaded for containerd runtime.
	I0731 11:13:14.349930 3747832 cache_images.go:74] Images are preloaded, skipping loading
	I0731 11:13:14.349993 3747832 ssh_runner.go:149] Run: sudo crictl info
	I0731 11:13:14.377777 3747832 cni.go:93] Creating CNI manager for ""
	I0731 11:13:14.377787 3747832 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0731 11:13:14.377805 3747832 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:13:14.377817 3747832 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-585335 NodeName:stopped-upgrade-585335 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/min
ikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0731 11:13:14.377943 3747832 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "stopped-upgrade-585335"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
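The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick sanity check on such a stream is to count its top-level `kind:` lines, one per document (the here-doc below is a trimmed stand-in, not the full config):

```shell
# Count the kubeadm documents in a multi-document YAML stream by
# counting top-level "kind:" lines (one per document).
cat <<'EOF' | grep -c '^kind:'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# 4
```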
	
	I0731 11:13:14.378019 3747832 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=stopped-upgrade-585335 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-585335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 11:13:14.378079 3747832 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0731 11:13:14.387156 3747832 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:13:14.387211 3747832 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 11:13:14.395493 3747832 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (567 bytes)
	I0731 11:13:14.412263 3747832 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 11:13:14.429053 3747832 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1888 bytes)
	I0731 11:13:14.445694 3747832 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:13:14.449608 3747832 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:13:14.461296 3747832 certs.go:52] Setting up /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335 for IP: 192.168.67.2
	I0731 11:13:14.461341 3747832 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key
	I0731 11:13:14.461355 3747832 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key
	I0731 11:13:14.461401 3747832 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.key
	I0731 11:13:14.461426 3747832 crypto.go:69] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.crt with IP's: []
	I0731 11:13:15.683024 3747832 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.crt ...
	I0731 11:13:15.683042 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.crt: {Name:mka872fd327ec11081ab068e9cde313e0ecf6e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:15.684810 3747832 crypto.go:165] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.key ...
	I0731 11:13:15.684822 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/client.key: {Name:mk13e071759b490ad705dfe4136101ebf6a0cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:15.686185 3747832 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key.c7fa3a9e
	I0731 11:13:15.686193 3747832 crypto.go:69] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 11:13:16.267414 3747832 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt.c7fa3a9e ...
	I0731 11:13:16.267434 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt.c7fa3a9e: {Name:mke81ea898bf66da8f8efd2a5165d325e294ac30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:16.268324 3747832 crypto.go:165] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key.c7fa3a9e ...
	I0731 11:13:16.268335 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key.c7fa3a9e: {Name:mke18d6960725ecb91badded65dc5b20f3d15ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:16.268980 3747832 certs.go:305] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt
	I0731 11:13:16.269046 3747832 certs.go:309] copying /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key
	I0731 11:13:16.269088 3747832 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.key
	I0731 11:13:16.269093 3747832 crypto.go:69] Generating cert /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.crt with IP's: []
	I0731 11:13:17.448245 3747832 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.crt ...
	I0731 11:13:17.448261 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.crt: {Name:mk1aed5dd2186e41f514091cd60f421ee29fce48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:17.449129 3747832 crypto.go:165] Writing key to /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.key ...
	I0731 11:13:17.449139 3747832 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.key: {Name:mk4131d55b0bb7d15d1742aa2b816c960802509d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:17.449673 3747832 certs.go:369] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403.pem (1338 bytes)
	W0731 11:13:17.449723 3747832 certs.go:365] ignoring /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403_empty.pem, impossibly tiny 0 bytes
	I0731 11:13:17.449730 3747832 certs.go:369] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 11:13:17.449755 3747832 certs.go:369] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:13:17.449779 3747832 certs.go:369] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:13:17.449814 3747832 certs.go:369] found cert: /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/home/jenkins/minikube-integration/16969-3616075/.minikube/certs/key.pem (1679 bytes)
	I0731 11:13:17.450930 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 11:13:17.473192 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 11:13:17.495878 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 11:13:17.518263 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/stopped-upgrade-585335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 11:13:17.540194 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:13:17.562743 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 11:13:17.584107 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:13:17.606138 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 11:13:17.628844 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:13:17.651053 3747832 ssh_runner.go:316] scp /home/jenkins/minikube-integration/16969-3616075/.minikube/certs/3621403.pem --> /usr/share/ca-certificates/3621403.pem (1338 bytes)
	I0731 11:13:17.673546 3747832 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 11:13:17.690305 3747832 ssh_runner.go:149] Run: openssl version
	I0731 11:13:17.696739 3747832 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:13:17.706559 3747832 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:17.710827 3747832 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:17.710884 3747832 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:17.718072 3747832 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 11:13:17.727829 3747832 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3621403.pem && ln -fs /usr/share/ca-certificates/3621403.pem /etc/ssl/certs/3621403.pem"
	I0731 11:13:17.737082 3747832 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3621403.pem
	I0731 11:13:17.741547 3747832 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul 31 10:43 /usr/share/ca-certificates/3621403.pem
	I0731 11:13:17.741597 3747832 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3621403.pem
	I0731 11:13:17.748346 3747832 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3621403.pem /etc/ssl/certs/51391683.0"
	I0731 11:13:17.758067 3747832 kubeadm.go:390] StartCluster: {Name:stopped-upgrade-585335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:stopped-upgrade-585335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0731 11:13:17.758143 3747832 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0731 11:13:17.758194 3747832 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 11:13:17.790112 3747832 cri.go:76] found id: ""
	I0731 11:13:17.790182 3747832 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 11:13:17.799846 3747832 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 11:13:17.808886 3747832 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0731 11:13:17.808942 3747832 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 11:13:17.818330 3747832 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:13:17.818359 3747832 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 11:13:18.962610 3747832 out.go:192]   - Generating certificates and keys ...
	I0731 11:13:26.215081 3743291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:13:26.250326 3743291 out.go:177] 
	W0731 11:13:26.252059 3743291 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:26Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0731 11:13:26.252075 3743291 out.go:239] * 
	W0731 11:13:26.255257 3743291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:13:26.257485 3743291 out.go:177] 
	
	* 
	* ==> container status <==
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2023-07-31 11:12:23 UTC, end at Mon 2023-07-31 11:13:27 UTC. --
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778297011Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778363850Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778427677Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778491447Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778592969Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.778724424Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779253301Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779369297Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779497551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779566334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779628258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779693817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779754133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779821858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779890814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.779955126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780018551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780122764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780187675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780255884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780313468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780572864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780679539Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jul 31 11:12:31 missing-upgrade-953629 containerd[629]: time="2023-07-31T11:12:31.780805931Z" level=info msg="containerd successfully booted in 0.057169s"
	Jul 31 11:12:31 missing-upgrade-953629 systemd[1]: Started containerd container runtime.
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001132] FS-Cache: O-key=[8] '14475c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000955] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=00000000ec69e0f1
	[  +0.001086] FS-Cache: N-key=[8] '14475c0100000000'
	[  +0.002815] FS-Cache: Duplicate cookie detected
	[  +0.000698] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001025] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=0000000023b8fd7c
	[  +0.001072] FS-Cache: O-key=[8] '14475c0100000000'
	[  +0.000724] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=0000000037c70260
	[  +0.001110] FS-Cache: N-key=[8] '14475c0100000000'
	[  +1.689708] FS-Cache: Duplicate cookie detected
	[  +0.000773] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=00000000011fd128
	[  +0.001150] FS-Cache: O-key=[8] '13475c0100000000'
	[  +0.000732] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=0000000065a709cb
	[  +0.001054] FS-Cache: N-key=[8] '13475c0100000000'
	[  +0.343766] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=00000000a94fc6de{9p.inode} n=00000000f52480e8
	[  +0.001080] FS-Cache: O-key=[8] '19475c0100000000'
	[  +0.000738] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a94fc6de{9p.inode} n=000000007e1f00b7
	[  +0.001196] FS-Cache: N-key=[8] '19475c0100000000'
	
	* 
	* ==> kernel <==
	*  11:13:28 up 18:55,  0 users,  load average: 2.49, 2.80, 2.26
	Linux missing-upgrade-953629 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2023-07-31 11:12:23 UTC, end at Mon 2023-07-31 11:13:28 UTC. --
	-- No entries --
	
	

-- /stdout --
** stderr ** 
	E0731 11:13:27.255318 3749315 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.291411 3749315 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.330670 3749315 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.362756 3749315 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.390335 3749315 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.418401 3749315 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.445933 3749315 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.477980 3749315 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0731 11:13:27.835083 3749315 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-31T11:13:27Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-07-31T11:13:27Z\" level=fatal msg=\"listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0731 11:13:28.202258 3749315 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-953629 -n missing-upgrade-953629
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-953629 -n missing-upgrade-953629: exit status 2 (321.968925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "missing-upgrade-953629" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-953629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-953629
E0731 11:13:28.952453 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-953629: (2.301615701s)
--- FAIL: TestMissingContainerUpgrade (219.31s)


Test pass (267/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.27.3/json-events 7.66
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.57
22 TestAddons/Setup 123.55
24 TestAddons/parallel/Registry 15.07
26 TestAddons/parallel/InspektorGadget 10.91
27 TestAddons/parallel/MetricsServer 6.07
30 TestAddons/parallel/CSI 38.92
31 TestAddons/parallel/Headlamp 11.79
32 TestAddons/parallel/CloudSpanner 5.67
35 TestAddons/serial/GCPAuth/Namespaces 0.17
36 TestAddons/StoppedEnableDisable 12.34
37 TestCertOptions 44.31
38 TestCertExpiration 244.67
40 TestForceSystemdFlag 43.44
41 TestForceSystemdEnv 44.41
42 TestDockerEnvContainerd 50.38
47 TestErrorSpam/setup 28.74
48 TestErrorSpam/start 0.86
49 TestErrorSpam/status 1.08
50 TestErrorSpam/pause 1.8
51 TestErrorSpam/unpause 1.87
52 TestErrorSpam/stop 1.98
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 55
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 15.09
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.09
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
64 TestFunctional/serial/CacheCmd/cache/add_local 1.46
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.23
69 TestFunctional/serial/CacheCmd/cache/delete 0.11
70 TestFunctional/serial/MinikubeKubectlCmd 0.14
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
72 TestFunctional/serial/ExtraConfig 58.09
73 TestFunctional/serial/ComponentHealth 0.1
74 TestFunctional/serial/LogsCmd 1.73
75 TestFunctional/serial/LogsFileCmd 1.73
76 TestFunctional/serial/InvalidService 4.69
78 TestFunctional/parallel/ConfigCmd 0.45
79 TestFunctional/parallel/DashboardCmd 7.9
80 TestFunctional/parallel/DryRun 0.7
81 TestFunctional/parallel/InternationalLanguage 0.24
82 TestFunctional/parallel/StatusCmd 1.27
86 TestFunctional/parallel/ServiceCmdConnect 8.68
87 TestFunctional/parallel/AddonsCmd 0.17
88 TestFunctional/parallel/PersistentVolumeClaim 24.22
90 TestFunctional/parallel/SSHCmd 0.74
91 TestFunctional/parallel/CpCmd 1.55
93 TestFunctional/parallel/FileSync 0.4
94 TestFunctional/parallel/CertSync 2.09
98 TestFunctional/parallel/NodeLabels 0.1
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
102 TestFunctional/parallel/License 0.33
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 0.87
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.55
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
109 TestFunctional/parallel/ImageCommands/ImageBuild 3.52
110 TestFunctional/parallel/ImageCommands/Setup 1.81
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.38
118 TestFunctional/parallel/ServiceCmd/List 0.45
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
121 TestFunctional/parallel/ServiceCmd/Format 0.49
122 TestFunctional/parallel/ServiceCmd/URL 0.52
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.63
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
139 TestFunctional/parallel/ProfileCmd/profile_list 0.39
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
141 TestFunctional/parallel/MountCmd/any-port 6.09
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
144 TestFunctional/delete_addon-resizer_images 0.08
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 82.62
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.96
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
157 TestJSONOutput/start/Command 59.44
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.79
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.69
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.78
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 41.7
183 TestKicCustomNetwork/use_default_bridge_network 32.81
184 TestKicExistingNetwork 36.52
185 TestKicCustomSubnet 38.57
186 TestKicStaticIP 33.88
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 68.66
191 TestMountStart/serial/StartWithMountFirst 8.75
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 7.3
194 TestMountStart/serial/VerifyMountSecond 0.28
195 TestMountStart/serial/DeleteFirst 1.65
196 TestMountStart/serial/VerifyMountPostDelete 0.29
197 TestMountStart/serial/Stop 1.23
198 TestMountStart/serial/RestartStopped 7.9
199 TestMountStart/serial/VerifyMountPostStop 0.27
202 TestMultiNode/serial/FreshStart2Nodes 109.59
203 TestMultiNode/serial/DeployApp2Nodes 9.13
204 TestMultiNode/serial/PingHostFrom2Pods 1.1
205 TestMultiNode/serial/AddNode 17.58
206 TestMultiNode/serial/ProfileList 0.33
207 TestMultiNode/serial/CopyFile 10.5
208 TestMultiNode/serial/StopNode 2.33
209 TestMultiNode/serial/StartAfterStop 11.78
210 TestMultiNode/serial/RestartKeepsNodes 142.19
211 TestMultiNode/serial/DeleteNode 4.99
212 TestMultiNode/serial/StopMultiNode 24.03
213 TestMultiNode/serial/RestartMultiNode 97.44
214 TestMultiNode/serial/ValidateNameConflict 43.22
219 TestPreload 173.89
221 TestScheduledStopUnix 120.29
224 TestInsufficientStorage 12.57
225 TestRunningBinaryUpgrade 121.97
227 TestKubernetesUpgrade 153.71
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
232 TestPause/serial/Start 70.34
233 TestNoKubernetes/serial/StartWithK8s 44.96
234 TestNoKubernetes/serial/StartWithStopK8s 22.49
235 TestNoKubernetes/serial/Start 6.63
236 TestPause/serial/SecondStartNoReconfiguration 14.87
237 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
238 TestNoKubernetes/serial/ProfileList 1.09
239 TestNoKubernetes/serial/Stop 1.22
240 TestNoKubernetes/serial/StartNoArgs 6.69
241 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
242 TestPause/serial/Pause 0.9
243 TestPause/serial/VerifyStatus 0.44
244 TestPause/serial/Unpause 0.89
245 TestPause/serial/PauseAgain 1.08
246 TestPause/serial/DeletePaused 3.12
247 TestPause/serial/VerifyDeletedResources 0.22
248 TestStoppedBinaryUpgrade/Setup 1.5
249 TestStoppedBinaryUpgrade/Upgrade 158.94
250 TestStoppedBinaryUpgrade/MinikubeLogs 1.54
265 TestNetworkPlugins/group/false 4.34
270 TestStartStop/group/old-k8s-version/serial/FirstStart 127.83
271 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
273 TestStartStop/group/old-k8s-version/serial/Stop 12.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
275 TestStartStop/group/old-k8s-version/serial/SecondStart 665.86
277 TestStartStop/group/no-preload/serial/FirstStart 64.82
278 TestStartStop/group/no-preload/serial/DeployApp 8.54
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
280 TestStartStop/group/no-preload/serial/Stop 12.24
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
282 TestStartStop/group/no-preload/serial/SecondStart 344.98
283 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.03
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
285 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
286 TestStartStop/group/no-preload/serial/Pause 3.2
288 TestStartStop/group/embed-certs/serial/FirstStart 85.69
289 TestStartStop/group/embed-certs/serial/DeployApp 7.53
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
291 TestStartStop/group/embed-certs/serial/Stop 12.14
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
293 TestStartStop/group/embed-certs/serial/SecondStart 344.18
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.52
297 TestStartStop/group/old-k8s-version/serial/Pause 4.45
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 96.38
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.52
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 359.68
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.04
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
308 TestStartStop/group/embed-certs/serial/Pause 3.23
310 TestStartStop/group/newest-cni/serial/FirstStart 46.02
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
313 TestStartStop/group/newest-cni/serial/Stop 1.27
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/newest-cni/serial/SecondStart 43.99
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
319 TestStartStop/group/newest-cni/serial/Pause 3.22
320 TestNetworkPlugins/group/auto/Start 90
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.03
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.29
325 TestNetworkPlugins/group/kindnet/Start 98.45
326 TestNetworkPlugins/group/auto/KubeletFlags 0.32
327 TestNetworkPlugins/group/auto/NetCatPod 9.39
328 TestNetworkPlugins/group/auto/DNS 0.28
329 TestNetworkPlugins/group/auto/Localhost 0.21
330 TestNetworkPlugins/group/auto/HairPin 0.23
331 TestNetworkPlugins/group/calico/Start 75.79
332 TestNetworkPlugins/group/kindnet/ControllerPod 5.06
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
334 TestNetworkPlugins/group/kindnet/NetCatPod 11.56
335 TestNetworkPlugins/group/kindnet/DNS 0.22
336 TestNetworkPlugins/group/kindnet/Localhost 0.17
337 TestNetworkPlugins/group/kindnet/HairPin 0.2
338 TestNetworkPlugins/group/calico/ControllerPod 5.05
339 TestNetworkPlugins/group/calico/KubeletFlags 0.42
340 TestNetworkPlugins/group/calico/NetCatPod 10.58
341 TestNetworkPlugins/group/custom-flannel/Start 62.7
342 TestNetworkPlugins/group/calico/DNS 0.26
343 TestNetworkPlugins/group/calico/Localhost 0.25
344 TestNetworkPlugins/group/calico/HairPin 0.24
345 TestNetworkPlugins/group/enable-default-cni/Start 46.85
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.36
348 TestNetworkPlugins/group/custom-flannel/DNS 0.3
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.3
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.42
353 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
356 TestNetworkPlugins/group/flannel/Start 75.55
357 TestNetworkPlugins/group/bridge/Start 89.96
358 TestNetworkPlugins/group/flannel/ControllerPod 5.03
359 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
360 TestNetworkPlugins/group/flannel/NetCatPod 8.34
361 TestNetworkPlugins/group/flannel/DNS 0.23
362 TestNetworkPlugins/group/flannel/Localhost 0.18
363 TestNetworkPlugins/group/flannel/HairPin 0.18
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
365 TestNetworkPlugins/group/bridge/NetCatPod 11.51
366 TestNetworkPlugins/group/bridge/DNS 0.18
367 TestNetworkPlugins/group/bridge/Localhost 0.17
368 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (13.85s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-106199 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-106199 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.846630243s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.85s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-106199
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-106199: exit status 85 (74.920399ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-106199 | jenkins | v1.31.1 | 31 Jul 23 10:37 UTC |          |
	|         | -p download-only-106199        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:37:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:37:45.635988 3621408 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:37:45.636129 3621408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:37:45.636138 3621408 out.go:309] Setting ErrFile to fd 2...
	I0731 10:37:45.636144 3621408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:37:45.636489 3621408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	W0731 10:37:45.636654 3621408 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16969-3616075/.minikube/config/config.json: open /home/jenkins/minikube-integration/16969-3616075/.minikube/config/config.json: no such file or directory
	I0731 10:37:45.637067 3621408 out.go:303] Setting JSON to true
	I0731 10:37:45.638152 3621408 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66013,"bootTime":1690733853,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:37:45.638234 3621408 start.go:138] virtualization:  
	I0731 10:37:45.641300 3621408 out.go:97] [download-only-106199] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 10:37:45.643344 3621408 out.go:169] MINIKUBE_LOCATION=16969
	W0731 10:37:45.641521 3621408 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 10:37:45.641569 3621408 notify.go:220] Checking for updates...
	I0731 10:37:45.646932 3621408 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:37:45.648824 3621408 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:37:45.650443 3621408 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:37:45.652145 3621408 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 10:37:45.655655 3621408 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 10:37:45.655949 3621408 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:37:45.684574 3621408 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:37:45.684663 3621408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:37:45.760989 3621408 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:37:45.750911184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:37:45.761088 3621408 docker.go:294] overlay module found
	I0731 10:37:45.762915 3621408 out.go:97] Using the docker driver based on user configuration
	I0731 10:37:45.762938 3621408 start.go:298] selected driver: docker
	I0731 10:37:45.762944 3621408 start.go:898] validating driver "docker" against <nil>
	I0731 10:37:45.763044 3621408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:37:45.831707 3621408 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:37:45.822950879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:37:45.831860 3621408 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 10:37:45.832130 3621408 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0731 10:37:45.832292 3621408 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 10:37:45.834402 3621408 out.go:169] Using Docker driver with root privileges
	I0731 10:37:45.836217 3621408 cni.go:84] Creating CNI manager for ""
	I0731 10:37:45.836233 3621408 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:37:45.836246 3621408 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:37:45.836259 3621408 start_flags.go:319] config:
	{Name:download-only-106199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-106199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:37:45.837892 3621408 out.go:97] Starting control plane node download-only-106199 in cluster download-only-106199
	I0731 10:37:45.837908 3621408 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0731 10:37:45.839421 3621408 out.go:97] Pulling base image ...
	I0731 10:37:45.839443 3621408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0731 10:37:45.839589 3621408 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:37:45.858310 3621408 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:37:45.858507 3621408 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:37:45.858605 3621408 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:37:45.909853 3621408 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0731 10:37:45.909874 3621408 cache.go:57] Caching tarball of preloaded images
	I0731 10:37:45.910025 3621408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0731 10:37:45.912004 3621408 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0731 10:37:45.912023 3621408 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0731 10:37:46.039041 3621408 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0731 10:37:50.773316 3621408 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-106199"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.27.3/json-events (7.66s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-106199 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-106199 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.660254528s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (7.66s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-106199
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-106199: exit status 85 (70.994872ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-106199 | jenkins | v1.31.1 | 31 Jul 23 10:37 UTC |          |
	|         | -p download-only-106199        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-106199 | jenkins | v1.31.1 | 31 Jul 23 10:37 UTC |          |
	|         | -p download-only-106199        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:37:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:37:59.555500 3621484 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:37:59.555618 3621484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:37:59.555627 3621484 out.go:309] Setting ErrFile to fd 2...
	I0731 10:37:59.555633 3621484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:37:59.555895 3621484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	W0731 10:37:59.556017 3621484 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16969-3616075/.minikube/config/config.json: open /home/jenkins/minikube-integration/16969-3616075/.minikube/config/config.json: no such file or directory
	I0731 10:37:59.556223 3621484 out.go:303] Setting JSON to true
	I0731 10:37:59.557177 3621484 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66027,"bootTime":1690733853,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:37:59.557240 3621484 start.go:138] virtualization:  
	I0731 10:37:59.559704 3621484 out.go:97] [download-only-106199] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 10:37:59.561575 3621484 out.go:169] MINIKUBE_LOCATION=16969
	I0731 10:37:59.559948 3621484 notify.go:220] Checking for updates...
	I0731 10:37:59.563339 3621484 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:37:59.565167 3621484 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:37:59.566995 3621484 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:37:59.568802 3621484 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 10:37:59.571766 3621484 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 10:37:59.572201 3621484 config.go:182] Loaded profile config "download-only-106199": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0731 10:37:59.572251 3621484 start.go:806] api.Load failed for download-only-106199: filestore "download-only-106199": Docker machine "download-only-106199" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 10:37:59.572408 3621484 driver.go:373] Setting default libvirt URI to qemu:///system
	W0731 10:37:59.572433 3621484 start.go:806] api.Load failed for download-only-106199: filestore "download-only-106199": Docker machine "download-only-106199" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 10:37:59.594843 3621484 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:37:59.594918 3621484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:37:59.680259 3621484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 10:37:59.670760577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:37:59.680359 3621484 docker.go:294] overlay module found
	I0731 10:37:59.682294 3621484 out.go:97] Using the docker driver based on existing profile
	I0731 10:37:59.682316 3621484 start.go:298] selected driver: docker
	I0731 10:37:59.682322 3621484 start.go:898] validating driver "docker" against &{Name:download-only-106199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-106199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:37:59.682504 3621484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:37:59.756801 3621484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 10:37:59.747990217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:37:59.757233 3621484 cni.go:84] Creating CNI manager for ""
	I0731 10:37:59.757248 3621484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0731 10:37:59.757259 3621484 start_flags.go:319] config:
	{Name:download-only-106199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-106199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:37:59.759218 3621484 out.go:97] Starting control plane node download-only-106199 in cluster download-only-106199
	I0731 10:37:59.759236 3621484 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0731 10:37:59.760951 3621484 out.go:97] Pulling base image ...
	I0731 10:37:59.760995 3621484 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0731 10:37:59.761065 3621484 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:37:59.776885 3621484 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:37:59.776998 3621484 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:37:59.777025 3621484 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 10:37:59.777032 3621484 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 10:37:59.777041 3621484 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 10:37:59.829641 3621484 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0731 10:37:59.829662 3621484 cache.go:57] Caching tarball of preloaded images
	I0731 10:37:59.829808 3621484 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0731 10:37:59.831738 3621484 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0731 10:37:59.831759 3621484 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 ...
	I0731 10:37:59.957349 3621484 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:14a60dcdae19ae70139b18fd027fe33b -> /home/jenkins/minikube-integration/16969-3616075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-106199"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-106199
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-830617 --alsologtostderr --binary-mirror http://127.0.0.1:45625 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-830617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-830617
--- PASS: TestBinaryMirror (0.57s)

TestAddons/Setup (123.55s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-315335 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-315335 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m3.548166936s)
--- PASS: TestAddons/Setup (123.55s)

TestAddons/parallel/Registry (15.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 29.664367ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v9cxf" [03de58e9-3f67-44c3-8965-868a902feada] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018175687s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pww6s" [f217c09b-8dba-4647-baa8-07ab108407df] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018657955s
addons_test.go:316: (dbg) Run:  kubectl --context addons-315335 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-315335 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-315335 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.962060728s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 ip
2023/07/31 10:40:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.07s)

TestAddons/parallel/InspektorGadget (10.91s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6kpcc" [4c3d3185-a291-48b5-9ac6-899943e49369] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014778874s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-315335
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-315335: (5.890990197s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

TestAddons/parallel/MetricsServer (6.07s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.596666ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-zmc78" [b317a770-8561-4b60-aded-a636d40c178a] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014121229s
addons_test.go:391: (dbg) Run:  kubectl --context addons-315335 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.07s)

TestAddons/parallel/CSI (38.92s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.325755ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-315335 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-315335 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9fb81ff1-3bd6-4ac5-b217-7491d9ac39e0] Pending
helpers_test.go:344: "task-pv-pod" [9fb81ff1-3bd6-4ac5-b217-7491d9ac39e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9fb81ff1-3bd6-4ac5-b217-7491d9ac39e0] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.012283066s
addons_test.go:560: (dbg) Run:  kubectl --context addons-315335 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-315335 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-315335 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-315335 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-315335 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-315335 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-315335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-315335 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b0b7ff5f-bc59-49c0-b39c-3e76c1bc4325] Pending
helpers_test.go:344: "task-pv-pod-restore" [b0b7ff5f-bc59-49c0-b39c-3e76c1bc4325] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.021920247s
addons_test.go:602: (dbg) Run:  kubectl --context addons-315335 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-315335 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-315335 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-315335 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.877712991s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-315335 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.92s)

TestAddons/parallel/Headlamp (11.79s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-315335 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-315335 --alsologtostderr -v=1: (1.748609278s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-9vz65" [acb1973f-762e-4402-8559-17d604cc8077] Pending
helpers_test.go:344: "headlamp-66f6498c69-9vz65" [acb1973f-762e-4402-8559-17d604cc8077] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-9vz65" [acb1973f-762e-4402-8559-17d604cc8077] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.035767028s
--- PASS: TestAddons/parallel/Headlamp (11.79s)

TestAddons/parallel/CloudSpanner (5.67s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-hrknx" [199feb9d-29db-45cf-94e0-47f047365345] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014743608s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-315335
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-315335 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-315335 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.34s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-315335
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-315335: (12.059434576s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-315335
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-315335
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-315335
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (44.31s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-488176 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-488176 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (41.579017885s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-488176 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-488176 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-488176 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-488176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-488176
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-488176: (2.090668498s)
--- PASS: TestCertOptions (44.31s)

TestCertExpiration (244.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-326876 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-326876 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.098319892s)
E0731 11:18:15.434081 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:18:28.951987 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-326876 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-326876 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (19.276417472s)
helpers_test.go:175: Cleaning up "cert-expiration-326876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-326876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-326876: (2.29072264s)
--- PASS: TestCertExpiration (244.67s)

TestForceSystemdFlag (43.44s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-019462 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-019462 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.005765342s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-019462 ssh "cat /etc/containerd/config.toml"
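The check above cats `/etc/containerd/config.toml` to confirm containerd's runc runtime is using the systemd cgroup driver (what `--force-systemd` enables). As a reference, a minimal sketch of the fragment the test looks for — key names follow containerd's CRI plugin config schema; the file minikube actually generates contains more settings and may differ in layout:

```toml
# containerd config (v2 schema): enable the systemd cgroup driver for runc
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true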
helpers_test.go:175: Cleaning up "force-systemd-flag-019462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-019462
E0731 11:15:59.297911 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-019462: (2.140290378s)
--- PASS: TestForceSystemdFlag (43.44s)

TestForceSystemdEnv (44.41s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-514711 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-514711 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.622455037s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-514711 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-514711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-514711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-514711: (2.368541019s)
--- PASS: TestForceSystemdEnv (44.41s)

TestDockerEnvContainerd (50.38s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-520617 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-520617 --driver=docker  --container-runtime=containerd: (34.10925604s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-520617"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-520617": (1.284266687s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ydnJdUIu833H/agent.3636843" SSH_AGENT_PID="3636844" DOCKER_HOST=ssh://docker@127.0.0.1:35343 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ydnJdUIu833H/agent.3636843" SSH_AGENT_PID="3636844" DOCKER_HOST=ssh://docker@127.0.0.1:35343 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ydnJdUIu833H/agent.3636843" SSH_AGENT_PID="3636844" DOCKER_HOST=ssh://docker@127.0.0.1:35343 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.631699637s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ydnJdUIu833H/agent.3636843" SSH_AGENT_PID="3636844" DOCKER_HOST=ssh://docker@127.0.0.1:35343 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-520617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-520617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-520617: (2.249452932s)
--- PASS: TestDockerEnvContainerd (50.38s)

TestErrorSpam/setup (28.74s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-515399 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-515399 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-515399 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-515399 --driver=docker  --container-runtime=containerd: (28.744814563s)
--- PASS: TestErrorSpam/setup (28.74s)

TestErrorSpam/start (0.86s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.08s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.8s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (1.87s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (1.98s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 stop: (1.790783278s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515399 --log_dir /tmp/nospam-515399 stop
--- PASS: TestErrorSpam/stop (1.98s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16969-3616075/.minikube/files/etc/test/nested/copy/3621403/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-302253 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (55.004677037s)
--- PASS: TestFunctional/serial/StartWithProxy (55.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.09s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-302253 --alsologtostderr -v=8: (15.091227645s)
functional_test.go:659: soft start took 15.091735304s for "functional-302253" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.09s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-302253 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:3.1: (1.543026714s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:3.3: (1.406425249s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 cache add registry.k8s.io/pause:latest: (1.176324476s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-302253 /tmp/TestFunctionalserialCacheCmdcacheadd_local1376215104/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache add minikube-local-cache-test:functional-302253
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 cache add minikube-local-cache-test:functional-302253: (1.007333419s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache delete minikube-local-cache-test:functional-302253
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-302253
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.812822ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 cache reload: (1.276319943s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 kubectl -- --context functional-302253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-302253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (58.09s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 10:45:12.389204 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.394955 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.405153 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.425355 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.465579 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.545865 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:12.706203 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:13.026698 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:13.667555 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:14.947964 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:17.509823 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:22.630029 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:45:32.870230 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-302253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.085040897s)
functional_test.go:757: restart took 58.085182475s for "functional-302253" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (58.09s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-302253 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 logs: (1.734349692s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 logs --file /tmp/TestFunctionalserialLogsFileCmd494032209/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 logs --file /tmp/TestFunctionalserialLogsFileCmd494032209/001/logs.txt: (1.732475461s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.69s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-302253 apply -f testdata/invalidsvc.yaml
E0731 10:45:53.350469 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-302253
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-302253: exit status 115 (396.168537ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32030 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-302253 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.69s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 config get cpus: exit status 14 (68.678496ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 config get cpus: exit status 14 (75.18551ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (7.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-302253 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-302253 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3651785: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.90s)

TestFunctional/parallel/DryRun (0.7s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-302253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (277.908914ms)

-- stdout --
	* [functional-302253] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 10:46:37.547513 3651280 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:46:37.547670 3651280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:37.547676 3651280 out.go:309] Setting ErrFile to fd 2...
	I0731 10:46:37.547681 3651280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:37.547951 3651280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:46:37.548293 3651280 out.go:303] Setting JSON to false
	I0731 10:46:37.549340 3651280 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66545,"bootTime":1690733853,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:46:37.549401 3651280 start.go:138] virtualization:  
	I0731 10:46:37.552836 3651280 out.go:177] * [functional-302253] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 10:46:37.554415 3651280 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 10:46:37.554519 3651280 notify.go:220] Checking for updates...
	I0731 10:46:37.556374 3651280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:46:37.559320 3651280 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:46:37.561174 3651280 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:46:37.563216 3651280 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 10:46:37.565365 3651280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:46:37.567972 3651280 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:46:37.568488 3651280 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:46:37.634527 3651280 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:46:37.634617 3651280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:46:37.745619 3651280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-31 10:46:37.734245166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:46:37.745724 3651280 docker.go:294] overlay module found
	I0731 10:46:37.747850 3651280 out.go:177] * Using the docker driver based on existing profile
	I0731 10:46:37.750583 3651280 start.go:298] selected driver: docker
	I0731 10:46:37.750717 3651280 start.go:898] validating driver "docker" against &{Name:functional-302253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-302253 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:46:37.750832 3651280 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:46:37.753528 3651280 out.go:177] 
	W0731 10:46:37.755744 3651280 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 10:46:37.757921 3651280 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.70s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-302253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-302253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (243.856767ms)

-- stdout --
	* [functional-302253] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 10:46:38.251423 3651476 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:46:38.252454 3651476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:38.252535 3651476 out.go:309] Setting ErrFile to fd 2...
	I0731 10:46:38.252557 3651476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:46:38.253154 3651476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:46:38.253749 3651476 out.go:303] Setting JSON to false
	I0731 10:46:38.257956 3651476 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":66546,"bootTime":1690733853,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 10:46:38.258033 3651476 start.go:138] virtualization:  
	I0731 10:46:38.260287 3651476 out.go:177] * [functional-302253] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	I0731 10:46:38.262341 3651476 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 10:46:38.263839 3651476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:46:38.262611 3651476 notify.go:220] Checking for updates...
	I0731 10:46:38.267605 3651476 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 10:46:38.269076 3651476 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 10:46:38.270783 3651476 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 10:46:38.272579 3651476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:46:38.274582 3651476 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:46:38.275139 3651476 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:46:38.298583 3651476 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:46:38.298677 3651476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:46:38.412364 3651476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-31 10:46:38.400779584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:46:38.412461 3651476 docker.go:294] overlay module found
	I0731 10:46:38.414521 3651476 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0731 10:46:38.416104 3651476 start.go:298] selected driver: docker
	I0731 10:46:38.416125 3651476 start.go:898] validating driver "docker" against &{Name:functional-302253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-302253 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:46:38.416230 3651476 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:46:38.418698 3651476 out.go:177] 
	W0731 10:46:38.420523 3651476 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 10:46:38.422339 3651476 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/ServiceCmdConnect (8.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-302253 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-302253 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-cdkrj" [5d5f2962-d0ef-475e-bfea-c1ebc2e1c26a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-cdkrj" [5d5f2962-d0ef-475e-bfea-c1ebc2e1c26a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.013956734s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32318
functional_test.go:1674: http://192.168.49.2:32318: success! body:

Hostname: hello-node-connect-58d66798bb-cdkrj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32318
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.68s)
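The echoserver body above is a set of section headers followed by indented `key=value` lines. When asserting on such responses, a small parser can pull out individual fields; the helper below is a hypothetical sketch, not part of the minikube test suite, with section and field names copied from the log.

```python
# Sketch: parse an echoserver-style response body into {section: {key: value}}.
# parse_echoserver_body is a hypothetical helper, not from the minikube source.

def parse_echoserver_body(body: str) -> dict:
    sections = {}
    current = None
    for line in body.splitlines():
        if not line.strip():
            continue
        if line.endswith(":") and not line.startswith("\t"):
            # A non-indented line ending in ":" starts a new section.
            current = line[:-1]
            sections[current] = {}
        elif current is not None and "=" in line:
            key, _, value = line.strip().partition("=")
            sections[current][key] = value
    return sections

# Fragment of the body captured above.
body = "\n".join([
    "Request Information:",
    "\tclient_address=10.244.0.1",
    "\tmethod=GET",
    "\trequest_uri=http://192.168.49.2:8080/",
    "Request Headers:",
    "\thost=192.168.49.2:32318",
])
info = parse_echoserver_body(body)
```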

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c253e632-8186-4715-b627-bc29ce834a13] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012701286s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-302253 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-302253 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-302253 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-302253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [da68b582-90f3-4fa2-ac69-17761b452d4e] Pending
helpers_test.go:344: "sp-pod" [da68b582-90f3-4fa2-ac69-17761b452d4e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [da68b582-90f3-4fa2-ac69-17761b452d4e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.018277412s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-302253 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-302253 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-302253 delete -f testdata/storage-provisioner/pod.yaml: (1.814529132s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-302253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [811a4b86-087c-4881-93f0-0f2b97986901] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [811a4b86-087c-4881-93f0-0f2b97986901] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.013990879s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-302253 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.22s)
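The sequence above claims a PVC, writes `/tmp/mount/foo` in a pod, deletes the pod, recreates it, and confirms the file survived. A minimal manifest pair exercising the same flow might look like the following; this is an illustrative sketch only, not the actual `testdata/storage-provisioner` manifests, with names (`myclaim`, `sp-pod`, `myfrontend`) and the `/tmp/mount` path taken from the log.

```yaml
# Illustrative sketch -- not the real testdata manifests.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/library/nginx
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```

Because the volume comes from a PVC rather than the pod's own filesystem, deleting and re-applying the pod manifest leaves `/tmp/mount` contents intact, which is exactly what the `exec sp-pod -- ls /tmp/mount` step verifies.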

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh -n functional-302253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 cp functional-302253:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3044885781/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh -n functional-302253 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3621403/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /etc/test/nested/copy/3621403/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3621403.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /etc/ssl/certs/3621403.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3621403.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /usr/share/ca-certificates/3621403.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/36214032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /etc/ssl/certs/36214032.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/36214032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /usr/share/ca-certificates/36214032.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-302253 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
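The go-template in the command above, `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`, prints the label keys of the first node in a `kubectl get nodes -o json` style document. The same extraction in Python, over a small sample document (the label values here are typical kubelet-set ones, not the exact labels asserted by the test):

```python
# Sketch of what the go-template computes: label keys of items[0].
import json

sample = json.loads("""
{"items": [{"metadata": {"labels": {
  "kubernetes.io/arch": "arm64",
  "kubernetes.io/hostname": "functional-302253",
  "kubernetes.io/os": "linux"
}}}]}
""")

# (index .items 0).metadata.labels, then range over the keys.
label_keys = list(sample["items"][0]["metadata"]["labels"])
```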

TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "sudo systemctl is-active docker": exit status 1 (388.343279ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "sudo systemctl is-active crio": exit status 1 (421.147543ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
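The non-zero exits above are the expected outcome: `systemctl is-active` exits 0 only when the unit is active, and prints the state (here `inactive`, with the inner `ssh` reporting exit status 3) otherwise. A sketch of how such a result can be interpreted; the helper name is hypothetical, not from the minikube source.

```python
# Sketch: treat a nonzero `systemctl is-active` exit plus a non-active state
# on stdout as "runtime disabled" -- the desired outcome in this test, which
# checks that docker and crio are off when containerd is the active runtime.
# runtime_disabled is a hypothetical helper.

def runtime_disabled(exit_status: int, stdout: str) -> bool:
    state = stdout.strip()
    # `systemctl is-active` exits 0 only for an active unit; inactive,
    # failed, or unknown units produce a nonzero status.
    return exit_status != 0 and state != "active"

# Values copied from the log above (exit status 3, stdout "inactive").
disabled = runtime_disabled(3, "inactive\n")
```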

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.87s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-302253 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-302253
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-302253 image ls --format short --alsologtostderr:
I0731 10:46:47.877993 3652438 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:47.878127 3652438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:47.878135 3652438 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:47.878141 3652438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:47.878502 3652438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:47.879206 3652438 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:47.879324 3652438 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:47.879848 3652438 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:47.897493 3652438 ssh_runner.go:195] Run: systemctl --version
I0731 10:46:47.897548 3652438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
I0731 10:46:47.914301 3652438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
I0731 10:46:48.010056 3652438 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-302253 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:66bf2c | 16.4MB |
| docker.io/library/nginx                     | latest             | sha256:ff78c7 | 67.3MB |
| localhost/my-image                          | functional-302253  | sha256:715702 | 831kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:24bc64 | 80.7MB |
| registry.k8s.io/kube-apiserver              | v1.27.3            | sha256:39dfb0 | 30.4MB |
| registry.k8s.io/kube-scheduler              | v1.27.3            | sha256:bcb9e5 | 16.5MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-302253  | sha256:17a87b | 1.01kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-proxy                  | v1.27.3            | sha256:fb73e9 | 21.4MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | sha256:ab3683 | 28.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-302253 image ls --format table --alsologtostderr:
I0731 10:46:52.479331 3653157 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:52.479590 3653157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:52.479617 3653157 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:52.479639 3653157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:52.479946 3653157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:52.480595 3653157 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:52.480772 3653157 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:52.481282 3653157 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:52.501393 3653157 ssh_runner.go:195] Run: systemctl --version
I0731 10:46:52.501452 3653157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
I0731 10:46:52.523750 3653157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
I0731 10:46:52.619320 3653157 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-302253 image ls --format json --alsologtostderr:
[{"id":"sha256:17a87bf59be8f0902bda4eae58f2c6ce80c82711311b54d09b637578d6c63e03","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-302253"],"size":"1006"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:715702e2cb7bc7bc28eb15e7c13a9fdaaaa9a8b6ce4c9576bb0b1d8d13544af7","repoDigests":[],"repoTags":["localhost/my-image:functional-302253"],"size":"830918"},{"id":"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"80665728"},{"id":"sha256:bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":["registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db321
8a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"16549864"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTag
s":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"28214546"},{"id":"sha256:fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":["registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"21369271"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"30386419"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["re
gistry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16359946"},{"id":"sha256:ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842","repoDigests":["docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca"],"repoTags":["docker.io/library/nginx:latest"],"size":"67306163"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minik
ube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-302253 image ls --format json --alsologtostderr:
I0731 10:46:52.055910 3652983 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:52.056055 3652983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:52.056065 3652983 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:52.056071 3652983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:52.056355 3652983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:52.057058 3652983 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:52.057249 3652983 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:52.057753 3652983 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:52.115223 3652983 ssh_runner.go:195] Run: systemctl --version
I0731 10:46:52.115275 3652983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
I0731 10:46:52.152727 3652983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
I0731 10:46:52.285996 3652983 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.55s)
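The JSON listing above is the raw `sudo crictl images --output json` output relayed by `image ls --format json`: an array of entries with `id`, `repoDigests`, `repoTags`, and a string-valued `size`. A minimal sketch of consuming such a dump (the two sample entries are copied from the log; this is not a guarantee of the full crictl schema):

```python
import json

# Trimmed sample in the same shape as the `image ls --format json` output
# above; values copied verbatim from the log.
sample = """[
  {"id": "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
   "repoDigests": ["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],
   "repoTags": ["gcr.io/k8s-minikube/storage-provisioner:v5"],
   "size": "8034419"},
  {"id": "sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300",
   "repoDigests": [],
   "repoTags": ["registry.k8s.io/pause:3.3"],
   "size": "249461"}
]"""

images = json.loads(sample)
# "size" is a decimal string in this format, so convert before summing
total = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total, tags)
```

Note that `repoTags` can be empty (untagged images keep only digests), so flattening with a nested comprehension avoids index errors.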

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-302253 image ls --format yaml --alsologtostderr:
- id: sha256:ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "28214546"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842
repoDigests:
- docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
repoTags:
- docker.io/library/nginx:latest
size: "67306163"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "80665728"
- id: sha256:bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "16549864"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
repoTags:
- docker.io/library/nginx:alpine
size: "16359946"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "21369271"
- id: sha256:17a87bf59be8f0902bda4eae58f2c6ce80c82711311b54d09b637578d6c63e03
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-302253
size: "1006"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "30386419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-302253 image ls --format yaml --alsologtostderr:
I0731 10:46:48.114923 3652463 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:48.115119 3652463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:48.115130 3652463 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:48.115137 3652463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:48.115429 3652463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:48.116086 3652463 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:48.116251 3652463 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:48.116764 3652463 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:48.134809 3652463 ssh_runner.go:195] Run: systemctl --version
I0731 10:46:48.134868 3652463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
I0731 10:46:48.152888 3652463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
I0731 10:46:48.246581 3652463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh pgrep buildkitd: exit status 1 (269.179658ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image build -t localhost/my-image:functional-302253 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-302253 image build -t localhost/my-image:functional-302253 testdata/build --alsologtostderr: (2.745879686s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-302253 image build -t localhost/my-image:functional-302253 testdata/build --alsologtostderr:
I0731 10:46:48.616702 3652538 out.go:296] Setting OutFile to fd 1 ...
I0731 10:46:48.618385 3652538 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:48.618402 3652538 out.go:309] Setting ErrFile to fd 2...
I0731 10:46:48.618408 3652538 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 10:46:48.618785 3652538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
I0731 10:46:48.619767 3652538 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:48.621211 3652538 config.go:182] Loaded profile config "functional-302253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0731 10:46:48.621775 3652538 cli_runner.go:164] Run: docker container inspect functional-302253 --format={{.State.Status}}
I0731 10:46:48.639755 3652538 ssh_runner.go:195] Run: systemctl --version
I0731 10:46:48.639808 3652538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-302253
I0731 10:46:48.657056 3652538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35353 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/functional-302253/id_rsa Username:docker}
I0731 10:46:48.748240 3652538 build_images.go:151] Building image from path: /tmp/build.3338335031.tar
I0731 10:46:48.748311 3652538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 10:46:48.759032 3652538 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3338335031.tar
I0731 10:46:48.763210 3652538 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3338335031.tar: stat -c "%s %y" /var/lib/minikube/build/build.3338335031.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3338335031.tar': No such file or directory
I0731 10:46:48.763241 3652538 ssh_runner.go:362] scp /tmp/build.3338335031.tar --> /var/lib/minikube/build/build.3338335031.tar (3072 bytes)
I0731 10:46:48.791298 3652538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3338335031
I0731 10:46:48.802654 3652538 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3338335031 -xf /var/lib/minikube/build/build.3338335031.tar
I0731 10:46:48.813318 3652538 containerd.go:378] Building image: /var/lib/minikube/build/build.3338335031
I0731 10:46:48.813382 3652538 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3338335031 --local dockerfile=/var/lib/minikube/build/build.3338335031 --output type=image,name=localhost/my-image:functional-302253
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a93ae7a25e0857c01c1e02c2266b359f4c36c321ff00ca7a9748bbbffdb132db 0.0s done
#8 exporting config sha256:715702e2cb7bc7bc28eb15e7c13a9fdaaaa9a8b6ce4c9576bb0b1d8d13544af7 0.0s done
#8 naming to localhost/my-image:functional-302253 done
#8 DONE 0.1s
I0731 10:46:51.283051 3652538 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3338335031 --local dockerfile=/var/lib/minikube/build/build.3338335031 --output type=image,name=localhost/my-image:functional-302253: (2.469634376s)
I0731 10:46:51.283126 3652538 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3338335031
I0731 10:46:51.294471 3652538 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3338335031.tar
I0731 10:46:51.305797 3652538 build_images.go:207] Built localhost/my-image:functional-302253 from /tmp/build.3338335031.tar
I0731 10:46:51.305827 3652538 build_images.go:123] succeeded building to: functional-302253
I0731 10:46:51.305832 3652538 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)
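The build log above shows minikube staging the context tar under /var/lib/minikube/build, extracting it, and invoking buildctl directly against containerd's BuildKit. A sketch of how that invocation is assembled (`buildctl_cmd` is a hypothetical helper; the flag layout and values are taken from the log):

```python
def buildctl_cmd(build_dir: str, image: str) -> list[str]:
    # Mirrors the invocation seen in the log: the dockerfile.v0 frontend,
    # with both the build context and the Dockerfile read from the
    # extracted build directory, exporting the result as a named image.
    return [
        "sudo", "buildctl", "build",
        "--frontend", "dockerfile.v0",
        "--local", f"context={build_dir}",
        "--local", f"dockerfile={build_dir}",
        "--output", f"type=image,name={image}",
    ]

cmd = buildctl_cmd("/var/lib/minikube/build/build.3338335031",
                   "localhost/my-image:functional-302253")
print(" ".join(cmd))
```

Passing the same directory for both `--local context=` and `--local dockerfile=` is what lets a plain tarball of a build context (Dockerfile plus files) drive the build without a Docker daemon.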

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.784450187s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-302253
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-302253 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-302253 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-9k2zx" [ce3b8d56-6d1b-4fd1-a8e7-64d2a32d26a4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-9k2zx" [ce3b8d56-6d1b-4fd1-a8e7-64d2a32d26a4] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.046105413s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.38s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service list -o json
functional_test.go:1493: Took "408.569347ms" to run "out/minikube-linux-arm64 -p functional-302253 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31311
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31311
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3649274: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image rm gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-302253 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5af7f658-8096-4929-9ff9-7f5db9e055ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5af7f658-8096-4929-9ff9-7f5db9e055ce] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.012633745s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-302253
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 image save --daemon gcr.io/google-containers/addon-resizer:functional-302253 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-302253
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-302253 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.186.227 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-302253 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "338.783036ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "55.090241ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "336.266714ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "55.490159ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (6.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdany-port3684084039/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1690800392235595647" to /tmp/TestFunctionalparallelMountCmdany-port3684084039/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1690800392235595647" to /tmp/TestFunctionalparallelMountCmdany-port3684084039/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1690800392235595647" to /tmp/TestFunctionalparallelMountCmdany-port3684084039/001/test-1690800392235595647
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.253256ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 10:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 10:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 10:46 test-1690800392235595647
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh cat /mount-9p/test-1690800392235595647
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-302253 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a75e7e0d-ab39-49c5-aa9c-65b7c5a1f86a] Pending
E0731 10:46:34.311479 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [a75e7e0d-ab39-49c5-aa9c-65b7c5a1f86a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a75e7e0d-ab39-49c5-aa9c-65b7c5a1f86a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a75e7e0d-ab39-49c5-aa9c-65b7c5a1f86a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.013807905s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-302253 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdany-port3684084039/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.09s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-302253 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-302253 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-302253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2644373783/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-302253
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-302253
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-302253
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (82.62s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-947999 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0731 10:47:56.232537 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-947999 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m22.619905248s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.62s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.96s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons enable ingress --alsologtostderr -v=5: (9.960454457s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.96s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-947999 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

TestJSONOutput/start/Command (59.44s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-051818 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0731 10:50:12.387857 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-051818 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.44289381s)
--- PASS: TestJSONOutput/start/Command (59.44s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-051818 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-051818 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-051818 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-051818 --output=json --user=testUser: (5.777606884s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-617189 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-617189 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.469368ms)

-- stdout --
	{"specversion":"1.0","id":"0e91264a-89c6-4119-8f0e-dc2ca69fa607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-617189] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a9413d4-db5a-4121-9861-8c593e1c7fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16969"}}
	{"specversion":"1.0","id":"fdb4770a-4b54-4e81-adab-118bea61f271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"108e38ed-a42f-4ce5-bef0-44e009c6cf72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig"}}
	{"specversion":"1.0","id":"58906e26-b083-499c-9632-c241add6513f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube"}}
	{"specversion":"1.0","id":"0cae63a0-9f66-4f5c-ab2e-ffdab0f95738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bfb2213f-4db0-4419-be6d-291bffd03395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f630c733-1fee-4408-882b-8deea9c33630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-617189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-617189
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (41.7s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-061564 --network=
E0731 10:50:40.073166 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 10:50:59.297513 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.302725 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.312927 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.333155 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.373387 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.454169 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.614474 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:50:59.934955 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:51:00.575383 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:51:01.855589 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:51:04.417296 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:51:09.538025 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-061564 --network=: (39.541157957s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-061564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-061564
E0731 10:51:19.779036 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-061564: (2.131622504s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.70s)

TestKicCustomNetwork/use_default_bridge_network (32.81s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-450186 --network=bridge
E0731 10:51:40.259249 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-450186 --network=bridge: (30.816615403s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-450186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-450186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-450186: (1.971960495s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.81s)

TestKicExistingNetwork (36.52s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-097229 --network=existing-network
E0731 10:52:21.219425 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-097229 --network=existing-network: (34.367678478s)
helpers_test.go:175: Cleaning up "existing-network-097229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-097229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-097229: (2.005180704s)
--- PASS: TestKicExistingNetwork (36.52s)

TestKicCustomSubnet (38.57s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-355488 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-355488 --subnet=192.168.60.0/24: (36.527438354s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-355488 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-355488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-355488
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-355488: (2.019626799s)
--- PASS: TestKicCustomSubnet (38.57s)

TestKicStaticIP (33.88s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-027570 --static-ip=192.168.200.200
E0731 10:53:28.952980 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:28.958228 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:28.968450 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:28.988682 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:29.028918 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:29.109188 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:29.269550 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:29.590240 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:30.230384 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:31.510813 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:34.072569 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:53:39.193201 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-027570 --static-ip=192.168.200.200: (31.550557277s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-027570 ip
helpers_test.go:175: Cleaning up "static-ip-027570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-027570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-027570: (2.161179052s)
--- PASS: TestKicStaticIP (33.88s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-346391 --driver=docker  --container-runtime=containerd
E0731 10:53:43.140453 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:53:49.434003 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:54:09.914210 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-346391 --driver=docker  --container-runtime=containerd: (30.437470866s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-349180 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-349180 --driver=docker  --container-runtime=containerd: (32.900412229s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-346391
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-349180
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-349180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-349180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-349180: (1.9231177s)
helpers_test.go:175: Cleaning up "first-346391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-346391
E0731 10:54:50.874406 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-346391: (2.227975997s)
--- PASS: TestMinikubeProfile (68.66s)

TestMountStart/serial/StartWithMountFirst (8.75s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-078013 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-078013 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.745328545s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.75s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-078013 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-079958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-079958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.301055369s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.30s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-079958 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-078013 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-078013 --alsologtostderr -v=5: (1.647475766s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-079958 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-079958
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-079958: (1.230661673s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.9s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-079958
E0731 10:55:12.387545 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-079958: (6.903595682s)
--- PASS: TestMountStart/serial/RestartStopped (7.90s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-079958 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (109.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-918316 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0731 10:55:59.297884 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 10:56:12.795520 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:56:26.981374 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-918316 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.067028274s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.59s)

TestMultiNode/serial/DeployApp2Nodes (9.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-918316 -- rollout status deployment/busybox: (2.126707591s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-7bkpn -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- nslookup kubernetes.io: (5.224999111s)
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-7bkpn -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-7bkpn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.13s)

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-7bkpn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-7bkpn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-918316 -- exec busybox-67b7f59bb-sxhfm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
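The host-IP extraction in the test above depends on fixed line positions in BusyBox `nslookup` output. A minimal sketch of the same pipeline against simulated output (the sample text below is an assumption for illustration, not captured from this run):

```shell
# Simulated BusyBox nslookup output; in the test this comes from running
# `nslookup host.minikube.internal` inside the busybox pod.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal'

# NR==5 selects the fifth line (the "Address 1: ..." answer row) and the
# third space-separated field is the host IP the test then pings.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The test then verifies pod-to-host connectivity with `ping -c 1 "$host_ip"`; the parsing is brittle by design, which is why a format change in the BusyBox applet would surface here first.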

                                                
                                    
TestMultiNode/serial/AddNode (17.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-918316 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-918316 -v 3 --alsologtostderr: (16.894234385s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.58s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.5s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp testdata/cp-test.txt multinode-918316:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415881988/001/cp-test_multinode-918316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316:/home/docker/cp-test.txt multinode-918316-m02:/home/docker/cp-test_multinode-918316_multinode-918316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test_multinode-918316_multinode-918316-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316:/home/docker/cp-test.txt multinode-918316-m03:/home/docker/cp-test_multinode-918316_multinode-918316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test_multinode-918316_multinode-918316-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp testdata/cp-test.txt multinode-918316-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415881988/001/cp-test_multinode-918316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m02:/home/docker/cp-test.txt multinode-918316:/home/docker/cp-test_multinode-918316-m02_multinode-918316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test_multinode-918316-m02_multinode-918316.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m02:/home/docker/cp-test.txt multinode-918316-m03:/home/docker/cp-test_multinode-918316-m02_multinode-918316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test_multinode-918316-m02_multinode-918316-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp testdata/cp-test.txt multinode-918316-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415881988/001/cp-test_multinode-918316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m03:/home/docker/cp-test.txt multinode-918316:/home/docker/cp-test_multinode-918316-m03_multinode-918316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316 "sudo cat /home/docker/cp-test_multinode-918316-m03_multinode-918316.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 cp multinode-918316-m03:/home/docker/cp-test.txt multinode-918316-m02:/home/docker/cp-test_multinode-918316-m03_multinode-918316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 ssh -n multinode-918316-m02 "sudo cat /home/docker/cp-test_multinode-918316-m03_multinode-918316-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-918316 node stop m03: (1.244124469s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-918316 status: exit status 7 (538.235219ms)

-- stdout --
	multinode-918316
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-918316-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-918316-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr: exit status 7 (549.799574ms)

-- stdout --
	multinode-918316
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-918316-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-918316-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0731 10:57:51.213939 3700430 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:57:51.214207 3700430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:57:51.214213 3700430 out.go:309] Setting ErrFile to fd 2...
	I0731 10:57:51.214218 3700430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:57:51.214593 3700430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 10:57:51.214807 3700430 out.go:303] Setting JSON to false
	I0731 10:57:51.214883 3700430 mustload.go:65] Loading cluster: multinode-918316
	I0731 10:57:51.216067 3700430 notify.go:220] Checking for updates...
	I0731 10:57:51.216423 3700430 config.go:182] Loaded profile config "multinode-918316": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 10:57:51.216465 3700430 status.go:255] checking status of multinode-918316 ...
	I0731 10:57:51.217055 3700430 cli_runner.go:164] Run: docker container inspect multinode-918316 --format={{.State.Status}}
	I0731 10:57:51.235438 3700430 status.go:330] multinode-918316 host status = "Running" (err=<nil>)
	I0731 10:57:51.235478 3700430 host.go:66] Checking if "multinode-918316" exists ...
	I0731 10:57:51.235780 3700430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-918316
	I0731 10:57:51.254730 3700430 host.go:66] Checking if "multinode-918316" exists ...
	I0731 10:57:51.255042 3700430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 10:57:51.255111 3700430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-918316
	I0731 10:57:51.284745 3700430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35418 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/multinode-918316/id_rsa Username:docker}
	I0731 10:57:51.379436 3700430 ssh_runner.go:195] Run: systemctl --version
	I0731 10:57:51.384906 3700430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:57:51.398342 3700430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:57:51.474404 3700430 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-31 10:57:51.458902092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 10:57:51.474983 3700430 kubeconfig.go:92] found "multinode-918316" server: "https://192.168.58.2:8443"
	I0731 10:57:51.475004 3700430 api_server.go:166] Checking apiserver status ...
	I0731 10:57:51.475045 3700430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:57:51.489355 3700430 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup
	I0731 10:57:51.500313 3700430 api_server.go:182] apiserver freezer: "6:freezer:/docker/d4599a9ad35bf1a043dd14ff2f5a8d1db98be262c74dd88405ff10e8bcf7e473/kubepods/burstable/pod30d6319355810d0581f31af00f35c5c3/ee8aa14c764454b7a64ac74115bd75b003542be112d3ad54a23c49bd212ea8a3"
	I0731 10:57:51.500401 3700430 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d4599a9ad35bf1a043dd14ff2f5a8d1db98be262c74dd88405ff10e8bcf7e473/kubepods/burstable/pod30d6319355810d0581f31af00f35c5c3/ee8aa14c764454b7a64ac74115bd75b003542be112d3ad54a23c49bd212ea8a3/freezer.state
	I0731 10:57:51.510846 3700430 api_server.go:204] freezer state: "THAWED"
	I0731 10:57:51.510877 3700430 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 10:57:51.520653 3700430 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0731 10:57:51.520678 3700430 status.go:421] multinode-918316 apiserver status = Running (err=<nil>)
	I0731 10:57:51.520690 3700430 status.go:257] multinode-918316 status: &{Name:multinode-918316 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:57:51.520709 3700430 status.go:255] checking status of multinode-918316-m02 ...
	I0731 10:57:51.521026 3700430 cli_runner.go:164] Run: docker container inspect multinode-918316-m02 --format={{.State.Status}}
	I0731 10:57:51.540021 3700430 status.go:330] multinode-918316-m02 host status = "Running" (err=<nil>)
	I0731 10:57:51.540045 3700430 host.go:66] Checking if "multinode-918316-m02" exists ...
	I0731 10:57:51.540335 3700430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-918316-m02
	I0731 10:57:51.557869 3700430 host.go:66] Checking if "multinode-918316-m02" exists ...
	I0731 10:57:51.558164 3700430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 10:57:51.558208 3700430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-918316-m02
	I0731 10:57:51.575363 3700430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35423 SSHKeyPath:/home/jenkins/minikube-integration/16969-3616075/.minikube/machines/multinode-918316-m02/id_rsa Username:docker}
	I0731 10:57:51.667032 3700430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:57:51.679814 3700430 status.go:257] multinode-918316-m02 status: &{Name:multinode-918316-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:57:51.679843 3700430 status.go:255] checking status of multinode-918316-m03 ...
	I0731 10:57:51.680141 3700430 cli_runner.go:164] Run: docker container inspect multinode-918316-m03 --format={{.State.Status}}
	I0731 10:57:51.701139 3700430 status.go:330] multinode-918316-m03 host status = "Stopped" (err=<nil>)
	I0731 10:57:51.701157 3700430 status.go:343] host is not running, skipping remaining checks
	I0731 10:57:51.701164 3700430 status.go:257] multinode-918316-m03 status: &{Name:multinode-918316-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
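The status probe in the stderr log above checks node disk usage with `df -h /var | awk 'NR==2{print $5}'` over SSH. A minimal sketch of that extraction against simulated output (the sample `df` text is an assumption for illustration, not taken from this run):

```shell
# Simulated `df -h /var` output; in the log, minikube runs this on the
# node via ssh_runner before reporting host status.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/root       194G   33G  161G  17% /'

# NR==2 skips the header row; $5 is the Use% column, the only value
# the status check needs.
used_pct=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')
echo "$used_pct"
```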
--- PASS: TestMultiNode/serial/StopNode (2.33s)

TestMultiNode/serial/StartAfterStop (11.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-918316 node start m03 --alsologtostderr: (10.954001035s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.78s)

TestMultiNode/serial/RestartKeepsNodes (142.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-918316
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-918316
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-918316: (25.039028214s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-918316 --wait=true -v=8 --alsologtostderr
E0731 10:58:28.952280 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 10:58:56.635715 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 11:00:12.387858 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-918316 --wait=true -v=8 --alsologtostderr: (1m57.009105529s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-918316
--- PASS: TestMultiNode/serial/RestartKeepsNodes (142.19s)

TestMultiNode/serial/DeleteNode (4.99s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-918316 node delete m03: (4.266501365s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.99s)

TestMultiNode/serial/StopMultiNode (24.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-918316 stop: (23.841150625s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-918316 status: exit status 7 (90.534571ms)

-- stdout --
	multinode-918316
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-918316-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr: exit status 7 (96.49601ms)

-- stdout --
	multinode-918316
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-918316-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 11:00:54.646702 3709011 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:00:54.646812 3709011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:00:54.646821 3709011 out.go:309] Setting ErrFile to fd 2...
	I0731 11:00:54.646826 3709011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:00:54.647070 3709011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 11:00:54.647229 3709011 out.go:303] Setting JSON to false
	I0731 11:00:54.647331 3709011 mustload.go:65] Loading cluster: multinode-918316
	I0731 11:00:54.647402 3709011 notify.go:220] Checking for updates...
	I0731 11:00:54.648316 3709011 config.go:182] Loaded profile config "multinode-918316": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 11:00:54.648340 3709011 status.go:255] checking status of multinode-918316 ...
	I0731 11:00:54.649232 3709011 cli_runner.go:164] Run: docker container inspect multinode-918316 --format={{.State.Status}}
	I0731 11:00:54.675638 3709011 status.go:330] multinode-918316 host status = "Stopped" (err=<nil>)
	I0731 11:00:54.675704 3709011 status.go:343] host is not running, skipping remaining checks
	I0731 11:00:54.675726 3709011 status.go:257] multinode-918316 status: &{Name:multinode-918316 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:00:54.675789 3709011 status.go:255] checking status of multinode-918316-m02 ...
	I0731 11:00:54.676143 3709011 cli_runner.go:164] Run: docker container inspect multinode-918316-m02 --format={{.State.Status}}
	I0731 11:00:54.692830 3709011 status.go:330] multinode-918316-m02 host status = "Stopped" (err=<nil>)
	I0731 11:00:54.692847 3709011 status.go:343] host is not running, skipping remaining checks
	I0731 11:00:54.692854 3709011 status.go:257] multinode-918316-m02 status: &{Name:multinode-918316-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

TestMultiNode/serial/RestartMultiNode (97.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-918316 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0731 11:00:59.297894 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 11:01:35.433875 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-918316 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m36.647330879s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-918316 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (97.44s)

TestMultiNode/serial/ValidateNameConflict (43.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-918316
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-918316-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-918316-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.755334ms)

-- stdout --
	* [multinode-918316-m02] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-918316-m02' is duplicated with machine name 'multinode-918316-m02' in profile 'multinode-918316'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-918316-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-918316-m03 --driver=docker  --container-runtime=containerd: (40.799625917s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-918316
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-918316: exit status 80 (336.789079ms)

-- stdout --
	* Adding node m03 to cluster multinode-918316
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-918316-m03 already exists in multinode-918316-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-918316-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-918316-m03: (1.934493444s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.22s)

TestPreload (173.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766990 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0731 11:03:28.952041 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766990 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.421528868s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766990 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-766990 image pull gcr.io/k8s-minikube/busybox: (1.372461967s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-766990
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-766990: (12.024930698s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766990 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0731 11:05:12.388010 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:05:59.297565 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766990 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m24.462278336s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766990 image list
helpers_test.go:175: Cleaning up "test-preload-766990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-766990
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-766990: (2.352905453s)
--- PASS: TestPreload (173.89s)

TestScheduledStopUnix (120.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-350387 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-350387 --memory=2048 --driver=docker  --container-runtime=containerd: (43.591726123s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-350387 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-350387 -n scheduled-stop-350387
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-350387 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-350387 --cancel-scheduled
E0731 11:07:22.341623 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-350387 -n scheduled-stop-350387
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-350387
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-350387 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-350387
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-350387: exit status 7 (70.256308ms)

-- stdout --
	scheduled-stop-350387
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-350387 -n scheduled-stop-350387
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-350387 -n scheduled-stop-350387: exit status 7 (79.710627ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-350387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-350387
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-350387: (5.062224799s)
--- PASS: TestScheduledStopUnix (120.29s)

TestInsufficientStorage (12.57s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-587297 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-587297 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.111117246s)

-- stdout --
	{"specversion":"1.0","id":"21992aac-881e-4a23-a47d-a23f09c8887f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-587297] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca9b8889-e5f0-4892-a986-d2be401f9beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16969"}}
	{"specversion":"1.0","id":"557f2ce1-5402-4ed7-8358-0aae7b1ad37e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f87a8991-ecc3-49d1-9779-b11e9a426188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig"}}
	{"specversion":"1.0","id":"1d7361fa-79ea-4631-9915-89a35788c881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube"}}
	{"specversion":"1.0","id":"72f5e2df-05c7-4a00-889f-d5956dfbaa7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9e240a11-4684-4993-a6fd-c74f14e3e70e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9750d474-2cee-4d35-8b73-04cff5bfe679","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a45891a2-5fe3-4863-a877-e2a0da15d839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8277986b-56dc-4bb1-b617-18f87fa9e33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"53c2b6f2-6025-4182-9d1c-098e0f4b6052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"25734e6d-7e3d-4f6b-8523-67fe18eae152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-587297 in cluster insufficient-storage-587297","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"413b86df-ef73-4a28-97b7-4d5e62f866f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f6cfe11-06b5-4615-b7e5-e4291d2cce0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e14fe3e8-d022-4250-84ad-5a3de5839c15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-587297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-587297 --output=json --layout=cluster: exit status 7 (296.872441ms)

-- stdout --
	{"Name":"insufficient-storage-587297","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-587297","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0731 11:08:23.813400 3726258 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-587297" does not appear in /home/jenkins/minikube-integration/16969-3616075/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-587297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-587297 --output=json --layout=cluster: exit status 7 (294.993207ms)

-- stdout --
	{"Name":"insufficient-storage-587297","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-587297","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0731 11:08:24.107933 3726311 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-587297" does not appear in /home/jenkins/minikube-integration/16969-3616075/kubeconfig
	E0731 11:08:24.120140 3726311 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/insufficient-storage-587297/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-587297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-587297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-587297: (1.86530093s)
--- PASS: TestInsufficientStorage (12.57s)

TestRunningBinaryUpgrade (121.97s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.2959262777.exe start -p running-upgrade-673252 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.2959262777.exe start -p running-upgrade-673252 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m20.372442961s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-673252 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-673252 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.545079543s)
helpers_test.go:175: Cleaning up "running-upgrade-673252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-673252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-673252: (2.880090551s)
--- PASS: TestRunningBinaryUpgrade (121.97s)

TestKubernetesUpgrade (153.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0731 11:10:12.387721 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:10:59.297600 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.794349292s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-700943
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-700943: (1.343938094s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-700943 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-700943 status --format={{.Host}}: exit status 7 (83.66482ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (51.716376748s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-700943 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (79.812651ms)

-- stdout --
	* [kubernetes-upgrade-700943] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-700943
	    minikube start -p kubernetes-upgrade-700943 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7009432 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-700943 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-700943 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.757485411s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-700943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-700943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-700943: (2.811630883s)
--- PASS: TestKubernetesUpgrade (153.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.116551ms)

-- stdout --
	* [NoKubernetes-810140] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestPause/serial/Start (70.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-797340 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-797340 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m10.33908094s)
--- PASS: TestPause/serial/Start (70.34s)

TestNoKubernetes/serial/StartWithK8s (44.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-810140 --driver=docker  --container-runtime=containerd
E0731 11:08:28.951895 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-810140 --driver=docker  --container-runtime=containerd: (44.528526857s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-810140 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.96s)

TestNoKubernetes/serial/StartWithStopK8s (22.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --driver=docker  --container-runtime=containerd: (20.250234889s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-810140 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-810140 status -o json: exit status 2 (327.541414ms)

-- stdout --
	{"Name":"NoKubernetes-810140","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-810140
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-810140: (1.910558275s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.49s)

TestNoKubernetes/serial/Start (6.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-810140 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.634463395s)
--- PASS: TestNoKubernetes/serial/Start (6.63s)

TestPause/serial/SecondStartNoReconfiguration (14.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-797340 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-797340 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (14.851012193s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (14.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-810140 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-810140 "sudo systemctl is-active --quiet service kubelet": exit status 1 (366.622132ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-810140
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-810140: (1.221429452s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-810140 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-810140 --driver=docker  --container-runtime=containerd: (6.687767028s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-810140 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-810140 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.080358ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-797340 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-797340 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-797340 --output=json --layout=cluster: exit status 2 (444.168353ms)

-- stdout --
	{"Name":"pause-797340","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-797340","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-797340 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

TestPause/serial/PauseAgain (1.08s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-797340 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-797340 --alsologtostderr -v=5: (1.075771832s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

TestPause/serial/DeletePaused (3.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-797340 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-797340 --alsologtostderr -v=5: (3.121804854s)
--- PASS: TestPause/serial/DeletePaused (3.12s)

TestPause/serial/VerifyDeletedResources (0.22s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-797340
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-797340: exit status 1 (23.976432ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-797340: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.22s)

TestStoppedBinaryUpgrade/Setup (1.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.50s)

TestStoppedBinaryUpgrade/Upgrade (158.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1805435450.exe start -p stopped-upgrade-585335 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.1805435450.exe start -p stopped-upgrade-585335 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m26.324923938s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.1805435450.exe -p stopped-upgrade-585335 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.1805435450.exe -p stopped-upgrade-585335 stop: (20.368584715s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-585335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-585335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (52.248880308s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (158.94s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.54s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-585335
E0731 11:15:12.387847 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-585335: (1.543271798s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.54s)

TestNetworkPlugins/group/false (4.34s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-076217 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-076217 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (253.852182ms)

-- stdout --
	* [false-076217] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0731 11:15:37.114652 3759274 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:15:37.114864 3759274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:15:37.114888 3759274 out.go:309] Setting ErrFile to fd 2...
	I0731 11:15:37.114909 3759274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:15:37.115202 3759274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16969-3616075/.minikube/bin
	I0731 11:15:37.115680 3759274 out.go:303] Setting JSON to false
	I0731 11:15:37.116762 3759274 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":68284,"bootTime":1690733853,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0731 11:15:37.116842 3759274 start.go:138] virtualization:  
	I0731 11:15:37.119530 3759274 out.go:177] * [false-076217] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:15:37.121605 3759274 out.go:177]   - MINIKUBE_LOCATION=16969
	I0731 11:15:37.121698 3759274 notify.go:220] Checking for updates...
	I0731 11:15:37.124490 3759274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:15:37.126435 3759274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16969-3616075/kubeconfig
	I0731 11:15:37.128094 3759274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16969-3616075/.minikube
	I0731 11:15:37.129962 3759274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:15:37.131739 3759274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:15:37.134218 3759274 config.go:182] Loaded profile config "force-systemd-flag-019462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0731 11:15:37.134420 3759274 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:15:37.173911 3759274 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:15:37.173990 3759274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:15:37.293636 3759274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 11:15:37.281952399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:15:37.293728 3759274 docker.go:294] overlay module found
	I0731 11:15:37.296241 3759274 out.go:177] * Using the docker driver based on user configuration
	I0731 11:15:37.297815 3759274 start.go:298] selected driver: docker
	I0731 11:15:37.297832 3759274 start.go:898] validating driver "docker" against <nil>
	I0731 11:15:37.297843 3759274 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:15:37.300217 3759274 out.go:177] 
	W0731 11:15:37.301867 3759274 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0731 11:15:37.303630 3759274 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-076217 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-076217

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-076217

>>> host: /etc/nsswitch.conf:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: /etc/hosts:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: /etc/resolv.conf:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-076217

>>> host: crictl pods:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: crictl containers:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> k8s: describe netcat deployment:
error: context "false-076217" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-076217" does not exist

>>> k8s: netcat logs:
error: context "false-076217" does not exist

>>> k8s: describe coredns deployment:
error: context "false-076217" does not exist

>>> k8s: describe coredns pods:
error: context "false-076217" does not exist

>>> k8s: coredns logs:
error: context "false-076217" does not exist

>>> k8s: describe api server pod(s):
error: context "false-076217" does not exist

>>> k8s: api server logs:
error: context "false-076217" does not exist

>>> host: /etc/cni:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: ip a s:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: ip r s:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

>>> host: iptables-save:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-076217" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-076217

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-076217"

                                                
                                                
----------------------- debugLogs end: false-076217 [took: 3.900571432s] --------------------------------
helpers_test.go:175: Cleaning up "false-076217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-076217
--- PASS: TestNetworkPlugins/group/false (4.34s)

TestStartStop/group/old-k8s-version/serial/FirstStart (127.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-523867 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-523867 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m7.832673351s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-523867 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b286d0c3-d873-468e-a1b8-a381848d10ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b286d0c3-d873-468e-a1b8-a381848d10ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.036141696s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-523867 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-523867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-523867 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-523867 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-523867 --alsologtostderr -v=3: (12.092529571s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-523867 -n old-k8s-version-523867
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-523867 -n old-k8s-version-523867: exit status 7 (71.572761ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-523867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (665.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-523867 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0731 11:20:12.388217 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-523867 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m5.498300083s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-523867 -n old-k8s-version-523867
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (665.86s)

TestStartStop/group/no-preload/serial/FirstStart (64.82s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-293457 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:20:59.296956 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-293457 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m4.817902901s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.82s)

TestStartStop/group/no-preload/serial/DeployApp (8.54s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-293457 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0396a7cc-5a7c-4dee-9742-a7aa5477b49b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0396a7cc-5a7c-4dee-9742-a7aa5477b49b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.030204122s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-293457 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.54s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-293457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-293457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079502514s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-293457 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.24s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-293457 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-293457 --alsologtostderr -v=3: (12.239361406s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-293457 -n no-preload-293457
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-293457 -n no-preload-293457: exit status 7 (72.558207ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-293457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (344.98s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-293457 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:23:28.951919 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 11:24:02.342454 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 11:25:12.387428 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:25:59.297403 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 11:26:31.996385 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-293457 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m44.529620445s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-293457 -n no-preload-293457
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (344.98s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-tqsr7" [a718c3b0-dab0-4617-aa67-982ff03ac085] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-tqsr7" [a718c3b0-dab0-4617-aa67-982ff03ac085] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.026021764s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-tqsr7" [a718c3b0-dab0-4617-aa67-982ff03ac085] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010654838s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-293457 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-293457 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/no-preload/serial/Pause (3.2s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-293457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-293457 -n no-preload-293457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-293457 -n no-preload-293457: exit status 2 (348.023001ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-293457 -n no-preload-293457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-293457 -n no-preload-293457: exit status 2 (331.215756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-293457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-293457 -n no-preload-293457
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-293457 -n no-preload-293457
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

TestStartStop/group/embed-certs/serial/FirstStart (85.69s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-829535 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:28:28.951770 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-829535 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m25.685116135s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.69s)

TestStartStop/group/embed-certs/serial/DeployApp (7.53s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-829535 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4744d9f3-6a25-401a-867b-5fc2e737a586] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4744d9f3-6a25-401a-867b-5fc2e737a586] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.032058863s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-829535 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.53s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-829535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-829535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162721565s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-829535 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-829535 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-829535 --alsologtostderr -v=3: (12.138105696s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-829535 -n embed-certs-829535
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-829535 -n embed-certs-829535: exit status 7 (81.020802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-829535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (344.18s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-829535 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:30:12.388091 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-829535 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m43.773696313s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-829535 -n embed-certs-829535
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rdsq4" [564cefec-02e7-4ab9-b4d0-6e8ff28891cd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023344398s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rdsq4" [564cefec-02e7-4ab9-b4d0-6e8ff28891cd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012303074s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-523867 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-523867 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.52s)

TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-523867 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-523867 --alsologtostderr -v=1: (1.019601753s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-523867 -n old-k8s-version-523867
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-523867 -n old-k8s-version-523867: exit status 2 (484.607675ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-523867 -n old-k8s-version-523867
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-523867 -n old-k8s-version-523867: exit status 2 (537.384344ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-523867 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-523867 --alsologtostderr -v=1: (1.198659298s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-523867 -n old-k8s-version-523867
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-523867 -n old-k8s-version-523867
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-072721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:30:59.297249 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
E0731 11:31:41.240535 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.245665 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.256146 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.276381 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.316636 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.397185 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.557539 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:41.878047 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:42.518737 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:43.799571 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:46.360737 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:31:51.481081 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:32:01.721744 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-072721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m36.383837752s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-072721 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1c107631-e94c-42f3-9ed5-1d5d9e6d7096] Pending
helpers_test.go:344: "busybox" [1c107631-e94c-42f3-9ed5-1d5d9e6d7096] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1c107631-e94c-42f3-9ed5-1d5d9e6d7096] Running
E0731 11:32:22.202723 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.028605756s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-072721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.52s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-072721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-072721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076490034s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-072721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-072721 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-072721 --alsologtostderr -v=3: (12.146288219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721: exit status 7 (84.841557ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-072721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (359.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-072721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:33:03.162982 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:33:28.951757 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 11:33:52.274376 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.280367 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.290606 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.311009 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.351330 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.431881 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.592298 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:52.912761 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:53.552929 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:54.833691 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:33:57.394325 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:34:02.514999 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:34:12.755960 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:34:25.083217 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
E0731 11:34:33.236163 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:34:55.434268 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:35:12.388010 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
E0731 11:35:14.196755 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-072721 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m59.21929956s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (359.68s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6l65l" [8a1151e5-efef-4f2b-840f-4ac8a7a4cadc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6l65l" [8a1151e5-efef-4f2b-840f-4ac8a7a4cadc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.030790641s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6l65l" [8a1151e5-efef-4f2b-840f-4ac8a7a4cadc] Running
E0731 11:35:59.297032 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010524689s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-829535 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-829535 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/embed-certs/serial/Pause (3.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-829535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-829535 -n embed-certs-829535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-829535 -n embed-certs-829535: exit status 2 (343.406091ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-829535 -n embed-certs-829535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-829535 -n embed-certs-829535: exit status 2 (333.172517ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-829535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-829535 -n embed-certs-829535
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-829535 -n embed-certs-829535
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

TestStartStop/group/newest-cni/serial/FirstStart (46.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-152139 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:36:36.116936 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:36:41.240755 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-152139 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (46.022377959s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-152139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-152139 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.174707095s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-152139 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-152139 --alsologtostderr -v=3: (1.265476091s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-152139 -n newest-cni-152139
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-152139 -n newest-cni-152139: exit status 7 (71.022898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-152139 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (43.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-152139 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0731 11:37:08.923993 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/no-preload-293457/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-152139 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (43.625095746s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-152139 -n newest-cni-152139
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (43.99s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-152139 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-152139 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-152139 -n newest-cni-152139
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-152139 -n newest-cni-152139: exit status 2 (354.909309ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-152139 -n newest-cni-152139
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-152139 -n newest-cni-152139: exit status 2 (353.278205ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-152139 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-152139 -n newest-cni-152139
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-152139 -n newest-cni-152139
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

TestNetworkPlugins/group/auto/Start (90s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0731 11:38:28.951714 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m29.993797568s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6jn92" [6b0d6628-a08a-4ac9-b7b2-c9584ee18248] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6jn92" [6b0d6628-a08a-4ac9-b7b2-c9584ee18248] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.027914253s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6jn92" [6b0d6628-a08a-4ac9-b7b2-c9584ee18248] Running
E0731 11:38:52.274354 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01083731s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-072721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-072721 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-072721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721: exit status 2 (344.525763ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721: exit status 2 (343.820771ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-072721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-072721 -n default-k8s-diff-port-072721
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)

TestNetworkPlugins/group/kindnet/Start (98.45s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m38.449055921s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-t5cgk" [02cfd12f-e095-4748-9d53-730e77f4111f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-t5cgk" [02cfd12f-e095-4748-9d53-730e77f4111f] Running
E0731 11:39:19.957639 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.011141088s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.39s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (75.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0731 11:40:12.388223 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/addons-315335/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m15.791014393s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.79s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-94lf6" [c9cb1fa6-fa86-4c5f-9d41-46469403f0f7] Running
E0731 11:40:42.342776 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/functional-302253/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.058564429s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.56s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zln9d" [8f307b48-15fc-4c2c-824e-207bb3b0223a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-zln9d" [8f307b48-15fc-4c2c-824e-207bb3b0223a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.017198155s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.56s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fgl8n" [663935cf-c9c1-40ab-aeef-57b07c1c871e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.050437051s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (10.58s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-khsqz" [bf0953c3-5032-412f-af04-3cbdb67402c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-khsqz" [bf0953c3-5032-412f-af04-3cbdb67402c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.012332993s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.58s)

TestNetworkPlugins/group/custom-flannel/Start (62.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.696492072s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.70s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (46.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0731 11:42:15.512450 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.517681 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.527892 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.548124 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.588443 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.668682 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:15.829160 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:16.149413 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:16.789838 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:18.070429 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:42:20.631517 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (46.853873849s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.85s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jh6mk" [fa8f64c2-5054-4ed1-a7df-a24d5640ee03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 11:42:25.753408 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-jh6mk" [fa8f64c2-5054-4ed1-a7df-a24d5640ee03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.010369105s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0731 11:42:35.994090 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.30s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k9csp" [44e692bb-a765-46d4-a005-9477f07369ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-k9csp" [44e692bb-a765-46d4-a005-9477f07369ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.013302322s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/flannel/Start (75.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m15.54499386s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.55s)

TestNetworkPlugins/group/bridge/Start (89.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0731 11:43:28.952371 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/ingress-addon-legacy-947999/client.crt: no such file or directory
E0731 11:43:37.434748 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/default-k8s-diff-port-072721/client.crt: no such file or directory
E0731 11:43:52.273567 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/old-k8s-version-523867/client.crt: no such file or directory
E0731 11:44:15.800927 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:15.806387 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:15.816715 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:15.836989 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:15.877229 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:15.958102 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:16.118821 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-076217 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m29.960739994s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.96s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6k7d8" [c5c4b40a-5989-4000-85e1-19f51e20da90] Running
E0731 11:44:16.439398 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:17.080249 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:18.360486 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
E0731 11:44:20.921310 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.032392159s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k2krc" [726a0476-fdb5-40f4-9801-b107da7d27ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-k2krc" [726a0476-fdb5-40f4-9801-b107da7d27ae] Running
E0731 11:44:26.042101 3621403 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16969-3616075/.minikube/profiles/auto-076217/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.011575071s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.34s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-076217 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (11.51s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-076217 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kz6cl" [7e91d1c6-6652-482d-834c-c9d6c56cd164] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kz6cl" [7e91d1c6-6652-482d-834c-c9d6c56cd164] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.012019703s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.51s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-076217 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-076217 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (28/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-358242 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-358242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-358242
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-440597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-440597
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-076217 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-076217
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-076217
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"
>>> host: /etc/hosts:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"
>>> host: /etc/resolv.conf:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-076217
>>> host: crictl pods:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-076217" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-076217

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-076217"

                                                
                                                
----------------------- debugLogs end: kubenet-076217 [took: 3.976996922s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-076217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-076217
--- SKIP: TestNetworkPlugins/group/kubenet (4.16s)
TestNetworkPlugins/group/cilium (5.62s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-076217 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-076217" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-076217

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: docker system info:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: cri-docker daemon status:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: cri-docker daemon config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: cri-dockerd version:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: containerd daemon status:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: containerd daemon config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: containerd config dump:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: crio daemon status:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: crio daemon config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: /etc/crio:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

>>> host: crio config:
* Profile "cilium-076217" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-076217"

----------------------- debugLogs end: cilium-076217 [took: 5.356396445s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-076217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-076217
--- SKIP: TestNetworkPlugins/group/cilium (5.62s)